- Definition of the scope of CSRM;
- Characterization of the facility;
- Characterization of the threats;
- Specification of requirements;
- Verification and validation;
- Acceptance by the competent authority.
| Section | Paragraph | Text |
|---|---|---|
| Main | 1.1. | Nuclear security seeks to prevent, detect and respond to criminal or intentional unauthorized acts involving or directed at nuclear and other radioactive material, associated facilities and associated activities. Nuclear security of nuclear material and nuclear facilities includes physical protection, personnel related security (e.g. trustworthiness determination, measures against insider threats) and information security. |
| Main | 1.2. | Groups or individuals planning or committing any malicious act involving nuclear material or a nuclear facility might benefit from access to sensitive information and sensitive information assets related to the material, the facility or the security measures in place. |
| Main | 1.3. | The Nuclear Security Fundamentals [1] and the three Nuclear Security Recommendations publications [2–4] all emphasize the importance of securing sensitive information. IAEA Nuclear Security Series No. 23‑G, Security of Nuclear Information [5], provides guidance on appropriate measures for the identification, classification and securing of sensitive information to achieve effective information security within the State’s nuclear security regime. |
| Main | 1.4. | Cyber‑attacks at nuclear facilities can contribute to causing physical damage to the facility and/or disabling its security or safety systems (i.e. sabotage), to obtaining unauthorized access to sensitive nuclear information, or to achieving unauthorized removal of nuclear material. Computer security is therefore vital at nuclear facilities to protect both nuclear security and nuclear safety. |
| Main | 1.5. | The protection of sensitive digital assets1 (SDAs) is recommended in para. 4.10 of Ref. [2], which states: |
| Main | 1.6. | General guidance on computer security for nuclear security is provided in IAEA Nuclear Security Series No. 42‑G, Computer Security for Nuclear Security [7], and more specific guidance on computer security of instrumentation and control (I&C) systems in nuclear facilities is provided in IAEA Nuclear Security Series No. 33‑T, Computer Security of Instrumentation and Control Systems at Nuclear Facilities [8]. The current publication is intended to complement this guidance by providing details of computer security techniques for other systems at nuclear facilities. |
| Main | 1.7. | The objective of this publication is to assist Member States in implementing computer security at nuclear facilities with the aim of preventing and protecting against unauthorized removal of nuclear material, sabotage of nuclear facilities and unauthorized access to sensitive nuclear information. This publication addresses computer security for supporting activities and organizations such as vendors, contractors and suppliers. While the focus of this publication is on the security of nuclear facilities, application of this guidance may also benefit facility safety and operational performance. |
| Main | 1.8. | This publication addresses the use of risk informed approaches to establish and enhance computer security policies, programmes and measures to protect SDAs and other digital assets. A nuclear facility relies on SDAs and other digital assets for the safety and security of the facility. This publication describes the integration of computer security into the management system of a facility or organization, and it includes guidance on defining policy and requirements and on activities to develop, implement, sustain, maintain, assess and continually improve the computer security measures that protect the facility from cyber‑attacks consistent with the threat assessment or design basis threat (DBT) [9]. |
| Main | 1.9. | This publication also provides technical guidance on protecting other digital assets at nuclear facilities. |
| Main | 1.10. | This publication is intended for regulatory bodies and other competent authorities and for operators of nuclear facilities and their vendors, contractors and suppliers. |
| Main | 1.11. | The guidance in this publication applies to the implementation and management of computer security for nuclear security purposes at nuclear facilities. This publication is applicable to all stages in the lifetime of a nuclear facility [10]. |
| Main | 1.12. | Computer security at nuclear facilities is intended to protect a range of systems that contribute to different aspects of nuclear security, such as physical protection and nuclear material accounting and control systems. This publication does not address the design or operation of such systems, except as design or operation relates to the protection of those systems by computer security measures. |
| Main | 1.13. | This publication addresses all digital assets associated with a nuclear facility, including the facility’s I&C systems. Additional guidance on specific computer security considerations for the facility’s I&C systems that provide safety, security or auxiliary functions is provided in Ref. [8]. |
| Main | 1.14. | Following this introduction, Section 2 introduces key terminology, basic concepts and relationships. Section 3 describes general considerations for computer security in nuclear facilities. Sections 4 and 5 present guidance on computer security risk management (CSRM) at the facility and system levels, respectively. Section 6 presents guidance on considerations for facility and system CSRM relevant to different stages in the lifetime of the facility. Section 7 presents an overview of a computer security programme (CSP). Section 8 presents an illustrative example of the implementation of defensive computer security architecture (DCSA) and associated computer security measures. |
| Main | 1.15. | The Appendix provides specific guidance on selected elements of a CSP. Annex I provides example attack scenarios that can be used to evaluate computer security at nuclear facilities. Annex II provides an example of the assignment of computer security levels for a nuclear power plant. Annex III provides an example of the application of computer security levels and zones. |
| Main | 2.1. | This section clarifies the meaning of important terms that are used throughout this publication. |
| Main | 2.2. | The Nuclear Security Fundamentals [1] state that the targets with respect to nuclear security are the following: |
| Main | 2.3. | Reference [1] states that a nuclear security system is “An integrated set of nuclear security measures.” Nuclear security measures are defined as follows: |
| Main | 2.4. | The general guidance on computer security [7] states: “The State should develop and maintain a national computer security strategy as part of its nuclear security regime”. As nuclear facilities are within the nuclear security regime, computer security at these facilities needs to be included in that national computer security strategy. Facility functions that support safety and security need to be protected from adversaries. When these facility functions make use of, depend on or are supported by digital technologies, computer security is needed to protect these functions. |
| Main | 2.5. | Computer security is concerned with computer based systems, especially those systems that perform or support facility functions important or related to nuclear security and nuclear safety (i.e. digital assets). Computer security provides techniques and tools to defend against cyber‑attacks and against human actions or omissions that might affect security. |
| Facility functions, computer security levels and computer security zones | 2.6. | A standard approach to protect systems in a structured way according to a graded approach is to use the concepts of computer security levels and computer security zones. The computer security level assigned to a computer security zone is based on the highest degree of security protection required by any facility function performed by a system within that zone. The same computer security level is assigned to all systems within that zone. Typically, a nuclear facility zone model consists of many different zones, and several zones may have the same computer security level assigned. |
| Facility functions, computer security levels and computer security zones | 2.7. | A facility function is a coordinated set of actions and processes that need to be performed at a nuclear facility. Facility functions include functions that are important or related to nuclear security and functions that are important or related to nuclear safety (i.e. safety functions).2 Facility functions are assigned to systems3, each of which performs one or more of these functions. |
| Facility functions, computer security levels and computer security zones | 2.8. | A computer security level is a designation that indicates the degree of security protection required for a facility function and consequently for the system that performs that function. Each computer security level is associated with a set of requirements imposed by the operator to ensure that the appropriate level of protection is provided to digital assets assigned to that level using a graded approach. Each computer security level will need different sets of computer security measures to satisfy the computer security requirements for that level. |
| Facility functions, computer security levels and computer security zones | 2.9. | A computer security zone is a logical and/or physical grouping of digital assets that are assigned to the same computer security level and that share common computer security requirements owing to inherent properties of the systems or their connections to other systems (and, if necessary, additional criteria). The use of computer security zones is intended to simplify the administration, communication and application of computer security measures.4 |
| Facility functions, computer security levels and computer security zones | 2.10. | Additional criteria for defining computer security zones may include the following: |
| Facility functions, computer security levels and computer security zones | 2.11. | The idealized relationships between the concepts of facility function(s), computer security level(s), system(s) and computer security zone(s) are illustrated in Fig. 1. |
| Facility functions, computer security levels and computer security zones | 2.12. | Each of the idealized relationships is labelled in Fig. 1, and the labelled text below describes each relationship: |
| Computer security risk management | 2.13. | Facility CSRM (see Section 4) addresses facility functions and determines the assignment of these functions to computer security levels and to one or more systems. Systems inherit the computer security levels of the functions assigned to them. |
| Computer security risk management | 2.14. | System CSRM (see Section 5) is part of facility CSRM and addresses systems and determines (a) the boundaries of computer security zones according to the facility functions performed and system connectivity as well as (b) the computer security measures to be applied to meet the requirements for the computer security level of the zone. |
| Computer security risk management | 2.15. | Outputs of risk management processes typically rely on scenario development, analysis and, in some instances, performance to increase confidence in the qualitative assessments. There are two categories of scenarios: functional and technical. Functional scenarios are generally used in the facility CSRM process, and technical scenarios are used in the system CSRM process. |
| Competing demands of simplicity, efficiency and computer security | 2.16. | The competing demands of simplicity, efficiency and computer security need to be balanced when considering the following: |
| Competing demands of simplicity, efficiency and computer security | 2.17. | Considerations of simplicity might lead to a preference to assign a single function to a single system. This might result in a DCSA that allows for the tailoring of efficient computer security measures within each zone for each facility function (assuming a one‑to‑one relationship between systems and functions). However, the systems would need interconnections to enable integration of separated facility functions, and therefore the system of computer security levels and computer security zones might become more complex owing to the larger number of computer security zones and interconnections between these zones. |
| Competing demands of simplicity, efficiency and computer security | 2.18. | However, considerations of efficiency in the performance of facility functions by systems might lead to a preference to assign multiple functions to a single integrated system. While this might result in a smaller number of computer security zones, the complexity of the system might increase, making it difficult to apply effective computer security measures throughout these zones. Additionally, assigning to the computer security zone a computer security level appropriate for the most important function of the system might further reduce efficiency because a higher level of protection than necessary might be applied to less important functions that have been integrated into the system. |
| Competing demands of simplicity, efficiency and computer security | 2.19. | The balance between efficiency and simplicity can also include balancing the performance of facility functions through systems, with the assignment of systems to computer security zones and computer security levels. Therefore, CSRM will typically involve a number of iterations of defining computer security zones and associated computer security measures to find the optimal balance between simplicity and efficiency. Iterations will need to show that proposed modifications of computer security zone definitions will not allow a compromise of the facility functions that would lead to worse consequences. |
| Conceptual nuclear facility zone model | 2.20. | An example of a conceptual nuclear facility zone model is shown in Fig. 2, with the following characteristics: |
| Conceptual nuclear facility zone model | 2.21. | Figure 2 illustrates a conceptual application of systems, computer security levels and computer security zones. The computer security level assigned has the following impact on the requirements for the facility functions, systems and computer security zones: |
| Conceptual nuclear facility zone model | 2.22. | The rigour with which computer security zones are defined may depend on the security levels assigned to those zones. For example, for zone Z1A, both the physical and logical boundaries are strictly defined, whereas zone Z5A might only need strict definition of the logical boundary, and the physical boundary might be more loosely defined (e.g. within a data centre, cloud service or corporate office). |
| Conceptual nuclear facility zone model | 2.23. | System boundaries (logical and physical) can be useful in defining computer security zone boundaries. In practice, a zone may comprise one or more systems, each system comprising or supported by one or more digital assets to perform or support the assigned facility function.9 |
| Conceptual nuclear facility zone model | 2.24. | Computer security zone boundaries generally have physical access control (e.g. locked cabinets, barriers, port blockers) and decoupling mechanisms for data flow (e.g. packet filters, firewalls, data diodes) to prevent cyber‑attacks or other forms of unauthorized access and to prevent errors propagating from one zone to another (especially from a zone with less stringent protection requirements to one with more stringent requirements). |
| Conceptual nuclear facility zone model | 2.25. | The zone model provides for a graded approach and defence in depth. A cyber‑attack originating outside the facility would need to defeat or bypass several layers of computer security measures before having the opportunity to compromise a system with computer security level 1, 2 or 3. The measures for computer security levels 4 and 5 can also contribute to the protection of the levels of higher protection.10 For example, providing early detection capabilities within zones assigned security level 4 or 5 would be advantageous in providing an opportunity to contain and mitigate the cyber‑attack before there is any impact on SDAs in levels 1, 2 or 3. |
| Conceptual nuclear facility zone model | 2.26. | In a graded approach, the strength of computer security measures put in place to protect a facility function is in direct proportion to the potential worst case consequences of a compromise of the facility function. |
| Conceptual nuclear facility zone model | 2.27. | Computer security measures are used for the following: |
| Conceptual nuclear facility zone model | 2.28. | Computer security measures may also be used for the following: |
| Conceptual nuclear facility zone model | 2.29. | Computer security measures can be assigned to one of three categories: technical control measures, physical control measures or administrative control measures (see Ref. [7]). |
| Conceptual nuclear facility zone model | 2.30. | Computer security measures might also contribute towards or be supported by other measures implemented for physical protection, personnel related security and information security. Section 8 provides an example of the application of computer security measures within a DCSA that has five levels. |
| Conceptual nuclear facility zone model | 2.31. | Computer based systems make use of, depend on or are supported by digital technologies. Computer based systems play an ever‑expanding role in the performance of important facility functions at nuclear facilities and associated operations. Increasingly, computer based systems are integrated into new designs and may be introduced into existing facilities during modernization or to increase productivity or reliability. |
| Conceptual nuclear facility zone model | 2.32. | Computer based systems are technologies that create, provide access to, compute, communicate or store digital information, or perform, provide or control services involving such information. These systems may be physical or virtual. These systems include desktops, laptops, tablets, other personal computers, smartphones, mainframes, servers, software applications, databases, removable media, digital I&C devices, programmable logic controllers, printers, network devices, and embedded components and devices. Some computer based systems are programmable, which provides the option to modify processing steps without changing the hardware. Computer based systems are susceptible to cyber‑attacks. |
| Conceptual nuclear facility zone model | 2.33. | In the context of this publication, the term ‘digital asset’ refers to a computer based system that is associated with a nuclear facility. Any digital asset that has an important role in the safety or security of a nuclear facility will be considered an SDA11. |
| Conceptual nuclear facility zone model | 2.34. | Computer security is concerned with the protection of computer based systems against compromise.12 Computer security is a subset of information security (as defined, for example, in ISO/IEC 27000 [11]) and shares many of the same goals, methodologies and terminology. |
| Conceptual nuclear facility zone model | 2.35. | The relationship between information security, sensitive information, sensitive information assets, digital assets and SDAs is shown in Fig. 3. |
| Conceptual nuclear facility zone model | 2.36. | A cyber‑attack is a malicious act with the intention of stealing, altering, preventing access to or destroying a specified target through unauthorized access to (or actions within) a susceptible system [8]. A cyber‑attack can be carried out by individuals or organizations and might target sensitive information or sensitive information assets. Cyber‑attacks have the following special characteristics: |
| Conceptual nuclear facility zone model | 2.37. | Compromise of digital assets might provide pathways for, facilitate or assist in cyber‑attacks targeting SDAs, with a corresponding adverse impact on nuclear security and nuclear safety. Therefore, it is necessary to provide appropriate protection — based on a graded approach and defence in depth — to all digital assets associated with the facility to prevent their use in the compromise of SDAs. The compromise of an SDA degrades nuclear security and might result in a nuclear security event13 with consequences ranging as follows (from best to worst case): |
| Conceptual nuclear facility zone model | 2.38. | The capabilities of potential adversaries might include the effective use of cyber‑attacks. Therefore, SDAs are targets both for their effect on facility functions and as a means for adversaries to facilitate and achieve their goals, and might be specifically targeted. |
| Conceptual nuclear facility zone model | 2.39. | A safety function is “A specific purpose that must be accomplished for safety” [12]. Safety functions are necessary “for a facility or activity to prevent or to mitigate radiological consequences of normal operation, anticipated operational occurrences and accident conditions” [12]. |
| Conceptual nuclear facility zone model | 2.40. | For example, the fundamental safety functions that are required for all plant states (Requirement 4 of IAEA Safety Standards Series No. SSR‑2/1 (Rev. 1), Safety of Nuclear Power Plants: Design [13]) are as follows: |
| Conceptual nuclear facility zone model | 2.41. | Paragraph 3.46 of Ref. [2] identifies physical protection functions as detection, delay and response. Physical protection functions use defence in depth and apply a graded approach to provide appropriate effective protection. |
| Conceptual nuclear facility zone model | 2.42. | Physical protection functions and safety functions are not necessarily inherently related to each other, making it difficult to treat safety functions and physical protection functions coherently in risk assessment methodologies. Therefore, describing and designating facility functions important or related to security in a manner similar to facility functions important or related to safety (i.e. safety functions) will simplify the determination of the significance of facility functions and will enable the equal treatment of safety functions and security functions of equivalent significance. Some examples of facility functions important to security are the following: |
| Conceptual nuclear facility zone model | 3.1. | Reference [7] states: |
| Conceptual nuclear facility zone model | 3.2. | The operator should identify and list the facility functions for the entire facility in a consistent manner to ensure that the identified set of facility functions can be assessed holistically. The operator should provide the list of identified facility functions to the competent authority14 consistent with national regulations. The computer security requirements15 for these facility functions should be considered, whatever the means of performing the functions (e.g. the specific technology employed, whether analogue or digital). |
| Conceptual nuclear facility zone model | 3.3. | The performance of facility functions will rely on or be supported by related sensitive information, sensitive information assets and other associated digital assets. |
| Conceptual nuclear facility zone model | 3.4. | The operator should apply computer security measures to ensure the appropriate protection (including traceability) of sensitive information, sensitive information assets and SDAs. Computer security is provided by measures to ensure confidentiality, integrity and availability as well as to meet any other requirements specified by the competent authority. |
| Conceptual nuclear facility zone model | 3.5. | The operator should identify sensitive information, taking into account the effects of its compromise and the State’s requirements for the security of sensitive information. Reference [5] provides detailed guidance on the development of a State’s requirements for sensitive information. |
| Conceptual nuclear facility zone model | 3.6. | Sensitive information may be identified directly by considering the potential consequences associated with its unauthorized disclosure (as indicated in Ref. [5]), for example information on security arrangements, which an adversary might use in planning a malicious act. For this type of information, confidentiality is typically the attribute that most needs protection. Sensitive information may also be identified less directly by considering its functional significance (i.e. its importance to the provision or performance of a facility function), for example accurate and timely data on boiler pressure, which an adversary might be more likely to exploit by modifying or destroying. For this type of information, the integrity and availability of the information might be at least as important as confidentiality. |
| Conceptual nuclear facility zone model | 3.7. | The information in the site security plan may be classified as sensitive information and measures may be implemented to protect its confidentiality for an extended period of time, since the information will remain sensitive throughout the period for which the site security plan is valid. |
| Conceptual nuclear facility zone model | 3.8. | For an I&C system and its process data, an operator might give priority to those measures that ensure system availability and integrity over those that ensure confidentiality. In this case, the process data are important to the correct performance and availability of the function and are only sensitive during the very limited intervals when the I&C system is performing a control action based on the data. However, once the process data are no longer important to the performance and availability of the function (i.e. can no longer form the basis of a control action), the historical process data have value only based on their sensitivity. Therefore, the security benefit arising from the increased assurance of confidentiality (to protect information sensitivity) needs to be balanced against that from protecting integrity and availability. |
| Conceptual nuclear facility zone model | 3.9. | While protecting the confidentiality of process data from these systems might not need stringent measures, the loss of confidentiality of other data related to the systems, such as administration passwords, source code and other key details, would provide the adversary with a significant benefit in the planning and execution of cyber‑attacks targeting the system and might lead to a need for stronger measures. Additionally, classification of the historical process data (e.g. logs) to limit their distribution (e.g. application of administrative control) might be necessary to reduce the risk of unauthorized disclosure to an acceptable level. |
| Conceptual nuclear facility zone model | 3.10. | Computer security should be implemented using a risk informed approach. Figure 4 of Ref. [7] provides an overview of a risk informed approach to computer security measures. |
| Conceptual nuclear facility zone model | 3.11. | Risk, in the computer security context, is the risk associated with an adversary exploiting the vulnerabilities of a digital asset or group of digital assets to commit or facilitate a malicious act. This risk is expressed as a combination of the likelihood of a successful attack and the severity of its consequences if it occurs. |
| Conceptual nuclear facility zone model | 3.12. | The operator should establish and implement a CSRM process (unless the management process is performed by the competent authority). The competent authority may specify policy requirements to be followed and may require that a specific risk assessment methodology be used, or it may agree to the use of an operator’s methodology [7]. The assessment process for a facility may follow the example of the organizational computer security risk assessment as described in paras 7.10–7.16 of Ref. [7]. |
| Conceptual nuclear facility zone model | 3.13. | The CSRM process should include a cyclical process for continual improvement16 in the management of risks associated with cyber‑attacks on the facility. |
| Conceptual nuclear facility zone model | 3.14. | Periodic and iterative risk assessments are used to support decision making within a risk management process. Computer security risk assessments are typically qualitative, involving relative metrics (e.g. high, medium, low), but could be quantitative if sufficiently reliable data were available.17 The results of risk assessments will assist in determining appropriate computer security requirements. (An illustrative sketch of such a qualitative risk matrix is provided following para. 4.67.) |
| Conceptual nuclear facility zone model | 3.15. | The operator should perform CSRM for the facility to comply with regulatory requirements. Reference [7] indicates that this may include two complementary assessments, one at the organizational level and one at the system level, and such an approach should be adopted for complex, high hazard facilities, such as nuclear facilities. In the guidance in this publication, it is therefore assumed that CSRM for a nuclear facility (facility CSRM) includes a specific phase of risk assessment and management at the system level (system CSRM) (see Fig. 4). This implies two stages: |
| Conceptual nuclear facility zone model | 3.16. | The operator should ensure independence between the teams responsible for performing overall CSRM to set the computer security requirements for the facility, those implementing the requirements and those validating that the requirements have been met. |
| Conceptual nuclear facility zone model | 3.17. | Risk management is relevant at all stages in the facility’s lifetime and throughout the life cycles of systems to inform the development, implementation and maintenance of computer security measures. Section 6 identifies risk management activities throughout the lifetime of a facility. |
| Conceptual nuclear facility zone model | 3.18. | A review of the risk assessment should be performed, and the risk assessment updated as necessary, in the following instances: |
| Conceptual nuclear facility zone model | 3.19. | Regulatory activities related to facility security, such as licensing, inspection and enforcement, should include appropriate consideration of computer security. Records from the risk management process and the resulting decisions and actions should be available for review by the competent authority on request to allow it to assess whether regulatory requirements are met. |
| Conceptual nuclear facility zone model | 3.20. | The overall structure and approach for the risk management process should include the following: |
| Conceptual nuclear facility zone model | 3.21. | Many methods exist for conducting risk assessment (see, for example, ISO/IEC 27005 [14]). Organizations need to choose a method and customize it to their specific organizational environment and objectives, while observing the need for separate facility and system level risk management. |
| Conceptual nuclear facility zone model | 3.22. | Computer security requirements and the design and implementation of measures to meet these requirements should be based on a graded approach, where computer security measures are applied in direct proportion to the potential consequences arising from compromise of the facility function. As indicated in Section 2, one practical way of applying a graded approach is to assign facility functions to computer security levels, where each computer security level is characterized by graded computer security requirements, and preventive and protective security measures can be selected to meet the requirements for the relevant level. Figure 5 illustrates the graded approach using computer security levels. |
| Conceptual nuclear facility zone model | 3.23. | While the requirements (e.g. explicit restrictions on communication between SDAs assigned to different levels) are fixed by the computer security levels, security measures (e.g. the specific type of firewall used to restrict such communications) can be chosen to protect digital assets (including SDAs) according to the architectural environment of the computer security level and the technology of the specific digital assets (including SDAs). |
| Conceptual nuclear facility zone model | 3.24. | In the computer security level approach, computer security requirements need to be defined for each level with the following considerations: (a) Generic requirements should be applied broadly throughout the facility and operating organization and may be applied to all digital assets. Generic requirements provide for improved nuclear security culture through a greater awareness of computer security. They also improve the computer security resilience and might provide additional defence in depth. Generic requirements cannot be credited with providing benefit to a specific computer security level or system because generic measures typically apply to a wide range of digital assets and cannot be relied on to be operated consistently and effectively. (b) Computer security levels are assigned, ranging from level 5 (least protection needed) to level 1 (most protection needed) (see Fig. 5). In this approach, systems containing SDAs would be in computer security levels 1–3, whereas systems in levels 4 and 5 contain other digital assets. (c) Computer security requirements are specified and applied according to the computer security levels assigned, in accordance with a graded approach. Computer security requirements should be based on defence in depth, whereby digital assets assigned to security levels affording higher protection do not rely solely on or implicitly trust digital assets or computer security measures of security levels with lower protection. (d) The computer security measures applied to meet the requirements for each computer security level should take into account the independence and diversity of the measures in order to reduce common vulnerabilities that could allow multiple layers of defence in depth to be bypassed or defeated. However, it might be necessary for some computer security measures applied in one computer security level to be repeated in other computer security levels. (e) With the application of a layered approach and defence in depth, computer security measures on lower levels can help protect the higher levels, especially with regard to early detection of cyber‑attack. (f) Computer based systems that are outside the control of the CSP are unassigned and should not be trusted by any digital asset at any computer security level. (An illustrative sketch of these level and zone rules is provided following para. 4.67.) |
| Conceptual nuclear facility zone model | 3.25. | Section 8 provides guidance on computer security requirements for a graded approach using the example of five computer security levels plus generic computer security requirements. |
| Conceptual nuclear facility zone model | 4.1. | Facility CSRM is a complex process that should be performed by a multidisciplinary team of people who have skills and competencies in nuclear security, nuclear safety, operations, maintenance, computer security and engineering.18 This team might have a composition similar to that proposed for physical protection evaluations (see Ref. [15]). |
| Conceptual nuclear facility zone model | 4.2. | Facility CSRM is an iterative process that is conducted in phases. It might be necessary to review and modify assumptions, determinations or results from a previous phase on the basis of the results of a subsequent phase. Verification activities are expected to be performed between phases. |
| Conceptual nuclear facility zone model | 4.3. | The objective of facility CSRM is to assess and manage risks associated with cyber‑attacks that have the potential to degrade the nuclear security or nuclear safety of the facility. |
| Conceptual nuclear facility zone model | 4.4. | Facility CSRM should ensure that the regulatory requirements regarding computer security are met. |
| Conceptual nuclear facility zone model | 4.5. | Facility CSRM should take account of an assessment of identified adversaries who might attack the facility and their goals (e.g. sabotage, unauthorized removal of nuclear material or radioactive material, unauthorized access to sensitive information), including an evaluation of the attractiveness of targets19 in the facility to these adversaries. The State’s assessment of threats might be provided by the national threat statement or DBT20. |
| Conceptual nuclear facility zone model | 4.6. | Facility CSRM should include a determination of the significance of each facility function in accordance with that function’s importance to the operator’s objectives. These determinations may allow for the development of a hierarchical list21 of potential nuclear security events (from most severe to no consequence) resulting from compromise of a facility function22. Figure 7 of Ref. [7] may be used in the development of such a hierarchical list. |
| Conceptual nuclear facility zone model | 4.7. | Facility CSRM should include consideration of facility functions but not their technical implementation in systems and digital assets, which are considered in system CSRM (see Section 5). |
| Conceptual nuclear facility zone model | 4.8. | The use of a consistent approach to facility CSRM across all facilities within a State may assist the competent authorities in providing effective oversight with respect to the application of computer security at nuclear facilities. |
| Inputs to facility computer security risk management | 4.9. | The operator should use the following as inputs to facility CSRM: |
| Phases of facility computer security risk management | 4.10. | The following are the phases of facility CSRM: |
| Phases of facility computer security risk management | 4.11. | The phases of facility CSRM are shown in Fig. 6, which provides an overview of the facility CSRM process. These phases are described in more detail in the remainder of this section. |
| Phases of facility computer security risk management | 4.12. | There is one facility CSRM process per facility, within which there is a separate system CSRM process for each system. For a site that contains multiple facilities or for an organization that operates multiple facilities, there may be one process for the whole site or whole organization, resulting in one or more sets of facility CSRM output. In this case, the operator may decide how many sets of output to generate but should ensure that the process is comprehensively applied to each facility. |
| Phases of facility computer security risk management | 4.13. | The operator should identify the scope of facility CSRM, which will be the physical or logical extent of the facility functions and associated systems of concern for nuclear security. Considerations in defining the scope might include the facility’s physical perimeter; the locations of approved vendors, contractors and suppliers; the operating organization’s corporate offices; off‑site data centres; and any other strategic locations. The scope of assessment might also vary depending on the stage in the lifetime of the facility or the capability and maturity of the operating organization (see paras 5.26–5.29 of Ref. [7]). |
| Identification of facility functions | 4.14. | The operator should identify all facility functions without consideration of how those functions are performed. The presence and use of digital assets throughout the facility and throughout its lifetime make it likely that digital assets will be used to perform or support the majority of key tasks and activities related to facility functions. |
| Identification of facility functions | 4.15. | The stage in the lifetime of the facility [10] should be taken into account in characterizing the facility and identifying the facility functions. Different facility functions will be relevant at different lifetime stages, and their relative importance might change. |
| Identification of facility functions | 4.16. | Facility functions are characterized by the following elements: |
| Intrinsic significance of facility functions | 4.17. | The significance of all facility functions should be compared in order to group together those that have similar significance, if possible using a common scale that includes both security and safety considerations. |
| Intrinsic significance of facility functions | 4.18. | For facility functions important or related to nuclear security, a classification scheme based on consequences for nuclear security, such as that outlined in Fig. 7 of Ref. [7], should be used to determine the significance of the function. |
| Intrinsic significance of facility functions | 4.19. | For facility functions important or related to nuclear safety, an established safety classification scheme may be used to determine the significance of the function. However, security considerations may necessitate the assignment of higher significance than indicated by a function’s safety classification. |
| Intrinsic significance of facility functions | 4.20. | The determination of the significance of facility functions should take into account that the performance of safety functions (by systems) may support security and the performance of security functions may support safety. As a result, the significance assigned to a safety function for computer security may differ from its safety class. |
| Intrinsic significance of facility functions | 4.21. | For example, a system providing a facility function of detecting radiation for the protection of personnel (a safety objective) may also provide for the detection of unauthorized removal of nuclear material (a nuclear security objective). Although the failure of the radiation protection function from a safety perspective might have limited consequences, the consequences of failure for nuclear security might be more severe. Therefore, the facility functions provided by the system in this example would be assigned a significance value on the basis of their importance to the nuclear security objectives. (Alternatively, the operator could choose to implement independent systems to separate the functions that support nuclear safety and nuclear security, and in this example the function supporting nuclear safety could be assigned lower significance.) |
| Potential effects of compromise of a system on facility function | 4.22. | In addition to considering the intrinsic significance of the facility function, the operator should consider the effects on facility function of compromise of the system intended to perform it. These effects are as follows (arranged from worst to best case): |
| Potential effects of compromise of a system on facility function | 4.23. | A system intended to perform a facility function might mal‑operate in different ways when compromised, and the effects of this mal‑operation depend on the circumstances and environment at the time of the compromise, the nature of the cyber‑attack causing the compromise, and the significance of the facility function. For example, a system performing a less important facility function might, through interdependencies and interactions between the functions, be used to attack a system performing a more important function. |
| Potential effects of compromise of a system on facility function | 4.24. | For each system and each type of effect of compromise (i.e. mal‑operation), there will be different consequences for the facility. These consequences should be assessed, and the significance assigned to facility functions should be based on these potential consequences. When assessing consequences, loss of confidentiality, integrity or availability of sensitive information should be considered, as well as consequences related to unauthorized removal of material or sabotage of the facility. |
| Potential effects of compromise of a system on facility function | 4.25. | The significance assigned to a facility function should take into account whether the facility function can be defined in a way that is valid for all possible conditions or modes on which the facility function might depend. If the facility function cannot be bound in this way, the list of consequences might be incomplete and additional analysis or assignment of a higher significance value (using a conservative approach) may be needed. |
| Interdependencies between facility functions | 4.26. | The determination of the significance of a facility function should also take into account the potential consequences of compromise (or mal‑operation) on other facility functions that depend on it. Examples of such functional dependencies include the following: |
| Interdependencies between facility functions | 4.27. | Analysis of interactions and interdependencies between facility functions might reveal that an important facility function has been omitted from the scope of the assessment. Dependencies might extend beyond the facility, for example the supply of water or power to the facility. Some functions provided by external organizations may need to be considered in the analysis of facility function dependencies. In this case, it may be necessary to revise the assessment scope to include those dependencies or to make changes at the facility that remove the dependencies. |
| Interdependencies between facility functions | 4.28. | Segregation of systems performing facility functions to limit the interactions and interdependencies between them might simplify the specification of computer security levels and requirements and might improve the effectiveness and efficiency of computer security measures. |
| Necessary timeliness and accuracy for facility function interdependencies | 4.29. | The determination of the significance of facility functions may also take into account the timeliness and accuracy with which one facility function needs to respond to another facility function. Timeliness can be considered in terms of requirements for the availability of sensitive information, and accuracy can be considered in terms of requirements for the integrity of such information: |
| Target identification | 4.30. | A target is defined in Ref. [1] as follows: |
| Target identification | 4.31. | Some systems performing facility functions will be targets and should be identified from the list of facility functions produced during facility CSRM, using the definitions of vital areas [16] and sensitive information [5]. Whether such a system is considered a target does not alter the significance of the facility function, but it is an additional consideration when determining computer security requirements. |
| Target identification | 4.32. | Targets that are associated with important facility safety and security functions should be identified as SDAs through the process described in paras 3.6–3.9. These SDAs should also be analysed for the potential value of any associated sensitive information. This will ensure that the SDAs and their associated information are considered within the facility’s information security programme and CSP and are afforded the appropriate level of protection. |
| Documentation of facility functions | 4.33. | The operator should document all facility functions identified and assessed during facility CSRM. |
| Documentation of facility functions | 4.34. | Identification of all the functions within the facility depends on having complete and accurate records describing the interactions and interdependencies between functions. These records will allow for the assessment of those functions that could have a negative impact on other functions if not performed correctly. |
| Documentation of facility functions | 4.35. | The interactions and interdependencies of a facility function might be internal or external and might be permanent or temporary. For example, during the development of systems, interaction might be needed between the development and operational environments through the physical transport of new software, data or devices, but these interactions could be removed when the systems are operational. |
| Documentation of facility functions | 4.36. | The operator should consider, when analysing the consequences of an attack directed at one facility function, the possibility that it could be part of an attack affecting multiple facility functions or part of a blended attack (i.e. combined cyber‑attack and physical attack). |
| Documentation of facility functions | 4.37. | The analysis may need to include an iterative assessment of each facility function, whereby one assessment is performed to determine the function’s intrinsic significance, and another is performed to determine the significance on the basis of interactions and interdependencies with other facility functions. The higher of the two levels of significance from these assessments should be used. |
| Documentation of facility functions | 4.38. | Those facility functions that have a direct relationship between the function not being performed correctly and the most severe consequences (e.g. those facility functions closely related to the three fundamental safety functions of controlling criticality, removing heat and containing material [12])24 should be assigned the greatest significance. In these cases, the assignment of significance should not take account of other parameters or factors. |
| Documentation of facility functions | 4.39. | Threat characterization depends on two separate continuous processes, which are interrelated: |
| Sources of threat information | 4.40. | Paragraph 3.34 of Ref. [2] states: |
| Sources of threat information | 4.41. | The operator should put in place measures to identify, retain and manage specific information25 related to potential cyber‑attacks and adversaries (e.g. phishing emails, malware samples) to allow follow‑up analysis to support threat characterization. The operator should ensure that these measures are implemented in a manner that does not adversely affect nuclear security or nuclear safety. |
| Sources of threat information | 4.42. | The operator’s threat characterization might draw on threat assessments performed by other organizations and on other sources of threat information (e.g. the operator’s own assessments, open source intelligence reports). |
| Sources of threat information | 4.43. | The relevant competent authority is encouraged to provide an analysis of the specific information captured by the operator in a timely and cooperative manner and to support the exchange of this analysis and other important information, consistent with the State’s requirements for sensitive information [5]. Periodic reporting of incidents to the relevant competent authority by the operator may be valuable, as threat analysis and characterization is a continual activity that demands up to date information. |
| Sources of threat information | 4.44. | During the development of the national threat statement or DBT, the competent authority and other relevant State authorities should have (or should have access to) expertise and knowledge regarding potential computer security incidents (e.g. cyber‑attacks) on nuclear facilities. |
| Sources of threat information | 4.45. | Reference [7] provides guidance on the assessment of cyber threats to a nuclear security regime as well as detailed descriptions of potential sources of attack and associated attack mechanisms relevant to nuclear facilities, and of methodologies used to evaluate and identify threats. |
| Facility specific threat characterization | 4.46. | The operator should develop and maintain a facility specific threat characterization to support the evaluation of computer security risk to the facility. This should include an analysis of the national threat statement or DBT to characterize the specific nuclear security threats to the facility that contribute to the computer security risk. The analysis should describe the potential objectives, capabilities, tactics and techniques of relevant threats, providing the basis for formulating or validating the effectiveness of the facility computer security policy and CSP. |
| Facility specific threat characterization | 4.47. | The operator should perform the threat characterization in the following instances: |
| Facility specific threat characterization | 4.48. | The threat characterization done by the operator should describe the knowledge, capabilities and funding, as well as the possible campaigns, targets, tactics, techniques and procedures of identified potential adversaries, and any additional attributes of particular relevance. Paragraph 5.19 of Ref. [9] provides a list of possible additional attributes for threat characterization. (A minimal record structure for such a characterization is sketched following para. 4.67.) |
| Facility specific threat characterization | 4.49. | The threat characterization done by the operator should identify potential combinations of tactics and techniques that might be used in an attack, such as coordinated remote and local actions, use of insiders and external adversaries, or blended attacks combining cyber‑attacks and physical attacks. The threat characterization should include the possibility of sequential or parallel cyber‑attacks with cumulative consequences, involving one or several adversaries, as well as cases where there are no indications of collusion between different adversaries (non‑collaborative attacks). |
| Facility specific threat characterization | 4.50. | The threat characterization done by the operator should allow for listing and assessment of credible types of attack. This list will form the basis of the computer security requirements and specification of the DCSA. |
| Facility specific threat characterization | 4.51. | The threat characterization should indicate whether the adversary has the capabilities to carry out a particular type of attack and whether the adversary can compromise a system performing a facility function in such a way that its behaviour is indeterminate (i.e. outside its design basis). |
| Additional considerations for insider threats | 4.52. | The threat characterization should include consideration of insider threats. Specific guidance is provided in Ref. [6]. For computer security, insider threats can be categorized as follows: |
| Additional considerations for insider threats | 4.53. | Adversary paths and the associated timelines for insider threats differ from other threats owing to insiders’ authorized access. This access allows insiders, for example, to carry out a non‑continuous series of tasks over an extended period of time. For example, the gathering of administrative credentials (through either social engineering or compromise of systems) to defeat measures such as access controls or segregation of duties could take place over several weeks, months or years. |
| Computer security policy and computer security programme | 4.54. | The operator’s computer security policy26 specifies the objectives and high level requirements for computer security of the facility, applying a graded approach and defence in depth. These high level requirements are specified by the operator, in compliance with applicable regulatory requirements, and are applicable without exceptions. The computer security policy is an input to facility CSRM, and facility CSRM may expand on and refine the facility computer security policy. |
| Computer security policy and computer security programme | 4.55. | The operator should develop and document its CSP27 as part of facility CSRM. The CSP is a framework for implementation of the facility computer security policy that will be used throughout the lifetime of the facility. The contents of a typical CSP are described in Section 7 and include the set of specific computer security requirements of the facility, in addition to those requirements identified by a risk informed approach. |
| Computer security policy and computer security programme | 4.56. | The operator should define computer security requirements in the CSP for the following, which are described in more detail in Section 7: |
| Computer security policy and computer security programme | 4.57. | The operator should specify within the CSP those baseline computer security measures that are mandatory for each computer security level. These measures are likely to consist of requirements that represent organizational policies and processes and will translate into procedures. |
| Computer security policy and computer security programme | 4.58. | Requirements for the strength of computer security measures should be identified and defined for each computer security level, consistent with regulatory requirements (if applicable). Exceptions to the application of a specific measure within a computer security level are strongly discouraged, and any such exceptions should be justified and documented within facility CSRM. (An illustrative check of per level baseline measures and documented exceptions is sketched following para. 4.67.) |
| Computer security policy and computer security programme | 4.59. | The principal outputs from the specification phase of facility CSRM are the documentation of the CSP (or revised CSP) and a compliance report for the competent authority indicating how implementation of the CSP will ensure that regulatory requirements are met. The CSP documentation may be a single document or a collection of separate documents but should include the following: |
| Computer security policy and computer security programme | 4.60. | The operator should provide its CSP documentation for review by the competent authority, along with the compliance report. |
| Assignment of systems performing facility functions to computer security levels | 4.61. | Facility CSRM should include or make use of a prioritized list of facility functions, arranged in order of the significance of the facility function, as the basis for the application of a graded approach to provide the highest level of assurance of protection to those functions that have the highest potential to lead to the most severe consequences. |
| Assignment of systems performing facility functions to computer security levels | 4.62. | The aim of the computer security level approach is to simplify the application of a graded approach. Computer security levels determine which set of computer security requirements is implemented to provide the appropriate level of protection to the system performing a facility function. |
| Assignment of systems performing facility functions to computer security levels | 4.63. | The operator should identify the number of computer security levels to be used, taking account of applicable regulatory requirements. For example, an operator could choose to apply a different computer security level for each facility function. However, the complexity of applying the approach increases with the number of computer security levels. Limiting the number of computer security levels allows for common approaches and methods to be applied to different systems. Therefore, the facility may choose to use a smaller number of levels. The benefit of simplicity in reducing the number of levels should be balanced against the cost in resources and efficiency of applying more stringent measures to facility functions than absolutely necessary in all cases. |
| Assignment of systems performing facility functions to computer security levels | 4.64. | The operator should ensure that each facility function is assigned to a single computer security level. |
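The following Python sketch illustrates, under stated assumptions rather than as a prescribed method, how the graded assignment described in paras 4.61–4.64 might be recorded and checked. The five-level scheme with level 1 as the most stringent, the function names and the significance labels are hypothetical examples introduced only for illustration.

```python
# Illustrative sketch only: hypothetical facility functions and a five-level
# scheme (level 1 = most stringent). Real assignments come from facility CSRM.

from dataclasses import dataclass

@dataclass(frozen=True)
class FacilityFunction:
    name: str
    significance: str       # e.g. "very high", "high", "medium", "low"
    security_level: int     # single assigned computer security level

# Hypothetical prioritized list of facility functions (para. 4.61).
FUNCTIONS = [
    FacilityFunction("reactor protection", "very high", 1),
    FacilityFunction("physical protection alarm assessment", "high", 2),
    FacilityFunction("nuclear material accountancy", "medium", 3),
    FacilityFunction("work planning", "low", 4),
]

def check_single_level_assignment(functions):
    """Verify that each facility function is assigned to exactly one
    computer security level (para. 4.64)."""
    seen = {}
    for f in functions:
        if f.name in seen and seen[f.name] != f.security_level:
            raise ValueError(f"{f.name} assigned to more than one level")
        seen[f.name] = f.security_level
    return seen

if __name__ == "__main__":
    for name, level in sorted(check_single_level_assignment(FUNCTIONS).items()):
        print(f"{name}: level {level}")
```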
| Assignment of systems performing facility functions to computer security levels | 4.65. | In some cases, facility functions important or related to security might not be sufficiently demarcated to allow them to be clearly distinguished from other functions. The inability to separate facility functions from one another increases the complexity in assigning the significance of the facility functions. Facility functions should therefore be distinct and independent from one another to the extent possible. The operator may consider modification of the facility functions with the aim of simplifying the application of the graded approach, which in turn might also be of benefit in applying defence in depth. |
| Assignment of systems performing facility functions to computer security levels | 4.66. | The operator should include the following in the CSP documentation: |
| Defensive computer security architecture specification | 4.67. | The operator should design and implement a DCSA in which all systems performing facility functions are assigned to a computer security level and protected according to computer security requirements specified for that level. |
| Defensive computer security architecture specification | 4.68. | The operator should specify those baseline computer security measures that are mandatory for each computer security level within the DCSA. These baseline measures may include technical, administrative and physical control measures. |
| Defensive computer security architecture specification | 4.69. | The DCSA should be designed to eliminate or limit the possible routes for cyber‑attack (as identified in the threat characterization) that an adversary could exploit to compromise systems performing facility functions. Similar processes for reducing physical pathways available to the adversary are detailed in Ref. [16]. |
| Defensive computer security architecture specification | 4.70. | Computer security boundaries28 should be established between systems performing facility functions that have different computer security levels. |
| Requirements in the DCSA specification to apply a graded approach | 4.71. | The DCSA specification should express the overall requirements (including the number of computer security levels) and should include the strength of measures for each computer security level, the strength of measures between different computer security levels and the rules for communication between zones at different computer security levels. |
| Requirements in the DCSA specification to apply a graded approach | 4.72. | The DCSA specification should ensure that facility functions with the highest significance are assigned to the most stringent computer security level. Requirements for communications between systems assigned to different facility functions should be defined. Data flow should be controlled between facility functions of different computer security levels in accordance with a risk informed approach. |
| Requirements in the DCSA specification to apply a graded approach | 4.73. | The DCSA specification should ensure that system design complexity is reduced where possible to simplify implementation of computer security measures. Decreasing the complexity of computer security measures can increase both performance and reliability. |
| Requirements in the DCSA specification to apply defence in depth | 4.74. | The DCSA specification should require the application of defence in depth through successive layers29 of computer security measures that have to be overcome or bypassed by an adversary in order to compromise systems performing facility functions. |
| Requirements in the DCSA specification to apply defence in depth | 4.75. | The DCSA specification should require a designed mixture of technical, physical and administrative control measures to provide defence in depth. |
| Requirements in the DCSA specification to apply defence in depth | 4.76. | The DCSA specification should require a design that ensures that a compromise or failure of a single computer security measure does not result in unacceptable consequences. |
| Requirements in the DCSA specification to apply defence in depth | 4.77. | The DCSA specification should require the use of independent and diverse measures to ensure that a common vulnerability cannot allow an adversary to compromise or bypass multiple layers of defence in depth with a single tactic. |
| Requirements in the DCSA specification to apply defence in depth | 4.78. | The DCSA specification should require the application of defence in depth between layers and within each layer. Layers of defence may use a combination of measures applicable to different computer security levels and apply them to different computer security zones. For the most severe consequences (i.e. high radiological consequences due to sabotage or unauthorized removal of Category I nuclear material), computer security measures should be implemented in multiple independent layers with the aim of providing deterministic and fail‑secure30 behaviour of systems in the event of cyber‑attack. |
| Requirements in the DCSA specification to apply defence in depth | 4.79. | The DCSA specification should be supported by an analysis report to identify computer security measures that are fail‑secure and deterministic within the application of defence in depth. This report may be requested by the competent authority to be submitted for review. |
| Defence in depth between layers | 4.80. | The DCSA specification should require each layer of defence in depth to be protected from cyber‑attacks originating in adjacent layers. Layers and their associated computer security measures should prevent or delay advancement of attacks. |
| Defence in depth between layers | 4.81. | The DCSA specification should require that the computer security measures used in a layer be selected and operated in a diverse and independent manner from those computer security measures used in an adjacent layer in order to mitigate common cause failures of protection mechanisms used for isolation between layers. In accordance with the principle of a graded approach, these requirements should be more stringent for those layers requiring the most stringent protection (i.e. computer security levels 1 and 2). |
| Defence in depth within a layer | 4.82. | The DCSA specification should require that a combination of computer security measures be employed within each layer to minimize the potential for a single compromise to overcome or bypass multiple measures. In accordance with the principle of a graded approach, these requirements should be greatest for those layers requiring the most stringent protection (i.e. computer security levels 1 and 2, with level 1 having the highest level of protection). |
| Trust model | 4.83. | The application of a graded approach and defence in depth should be consistent with an applicable trust model. Trust models that may be applied include the following: |
| Trust model | 4.84. | After specification of the computer security requirements, the implementation of those requirements proceeds as illustrated in Fig. 6 (see also Fig. 7). The implementation of requirements demands understanding of the ways in which facility functions are performed by digital assets. |
| Trust model | 4.85. | The risk management processes in facility and system CSRM have significant interactions (see Figs 6 and 7). Facility CSRM includes the assignment of one or more facility functions to individual systems and thus sets the scope for each system’s CSRM, but facility CSRM might also be affected by the outputs of system CSRM in an iterative process. For example, in physical protection systems, multiple facility functions may be assigned to a single system owing to the unavailability of systems with segregated functions. This restricts the ability to segregate the system into separate zones, thereby limiting the zone model to either a physical boundary or a logical boundary. |
| Trust model | 4.86. | For legacy facilities or systems, some structures, systems and components might not be modifiable or alterable. This might mean at the system CSRM phase that some requirements defined in facility CSRM cannot be met, and the operator might need to revise facility CSRM to determine a suitable CSP and DCSA specification that meets the security requirements. |
| Trust model | 4.87. | Facility and system CSRM should be reviewed and might need to be revised in the following instances: |
| Trust model | 4.88. | The review of both the facility and system CSRM processes needs to be included in the facility change management process to ensure that they are consistent with one another and are kept up to date. These analyses also assist in setting the requirements (e.g. defining the computer security levels) for new systems or implementations. |
| Trust model | 4.89. | Trends in successive iterations of facility and system CSRM should be periodically assessed to identify the following types of adverse pattern: |
| Trust model | 4.90. | Trends associated with individual systems should be analysed to ensure that the trend has not invalidated the facility CSRM output. For example, system surveillance assessments may be performed continually, and system performance monitoring reports may then be approved periodically. Outputs from the corresponding systems’ CSRM should be reviewed in the facility CSRM process to ensure there is no change in the overall facility risk. |
| Trust model | 4.91. | There are three types of assurance activity: |
| Evaluation | 4.92. | The operator should evaluate the CSP and the DCSA to verify that their implementation will be effective in reducing the opportunity of adversaries to compromise systems performing facility functions, specifically through the following: |
| Evaluation | 4.93. | The evaluation of the CSP and the DCSA should include functional and performance testing in a manner that meets regulatory requirements. The evaluation should include consideration, as appropriate, of both facility and system CSRM and of the whole lifetime of the facility. |
| Evaluation | 4.94. | The operator should consider using independent experts to review its CSP and DCSA. |
| Evaluation | 4.95. | The operator should justify all assumptions about the likelihood of attacks or their success (e.g. vulnerability, exposure, opportunity) that are used in evaluation. The likelihood should be assumed to be 1 for postulated scenarios that can result in unacceptable radiological consequences33 or unauthorized removal of nuclear material (i.e. compromise of SDAs). |
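A minimal sketch of the conservative assumption in para. 4.95: for evaluation purposes, the likelihood of attack is set to 1 for any postulated scenario that could lead to unacceptable radiological consequences or unauthorized removal of nuclear material. The numeric scales and example values are assumptions made only for this illustration.

```python
# Illustrative only: the risk formulation and scales are hypothetical.

def scenario_risk(likelihood: float, consequence_severity: float,
                  unacceptable_consequence: bool) -> float:
    """Return a simple risk value (likelihood x consequence severity).

    For scenarios that can result in unacceptable radiological consequences
    or unauthorized removal of nuclear material, the likelihood is
    conservatively forced to 1 (para. 4.95)."""
    if unacceptable_consequence:
        likelihood = 1.0
    return likelihood * consequence_severity

# Example: a postulated sabotage scenario is treated as certain to be attempted.
print(scenario_risk(likelihood=0.05, consequence_severity=10.0,
                    unacceptable_consequence=True))   # -> 10.0
print(scenario_risk(likelihood=0.05, consequence_severity=4.0,
                    unacceptable_consequence=False))  # -> 0.2
```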
| Evaluation | 4.96. | The national threat statement or DBT and the facility specific threat assessment provide the basis by which the operator can conduct an analysis to confirm the assumptions made during the assignment of facility functions to the appropriate computer security level. The use of credible functional scenarios (para. 4.120(a)) may allow for a greater level of assurance in the quality of the assessment (see Annex I for example scenarios). |
| Evaluation | 4.97. | Computer security measures based on the CSP and the DCSA provide detection, delay and response functions through physical (e.g. structure), technical (e.g. firewall) and administrative (e.g. personnel, procedures) control measures. The interaction of these computer security measures with the facility functions important to safety and security, and their assigned systems, makes the evaluation of the CSP’s effectiveness a challenging task. |
| Evaluation | 4.98. | A number of evaluation methods are available, including the following: |
| Evaluation | 4.99. | Simulation and exercises are typically performed as part of scenario based analysis, in which postulated attacks (scenarios) are specified in detail and simulated or used as a basis for exercises. Scenario based analysis typically builds on attack tree analysis by considering specific adversary tactics and techniques for defeating computer security measures. |
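Paragraph 4.99 mentions attack tree analysis as a basis for scenario based analysis. The sketch below shows one common way such a tree can be represented and evaluated, with AND nodes requiring every child measure to be defeated and OR nodes requiring only one. The tree structure and its contents are hypothetical and are not taken from this publication.

```python
# Illustrative attack tree evaluation: the node structure and example tree are
# hypothetical. Leaves indicate whether the postulated adversary is assumed
# capable of defeating that individual computer security measure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "leaf"              # "leaf", "and" or "or"
    defeated: bool = False          # for leaves: assumed adversary capability
    children: List["Node"] = field(default_factory=list)

def attack_succeeds(node: Node) -> bool:
    """Return True if the adversary can reach this node's objective."""
    if node.kind == "leaf":
        return node.defeated
    results = [attack_succeeds(c) for c in node.children]
    return all(results) if node.kind == "and" else any(results)

# Hypothetical example: compromising an SDA requires both gaining a network
# foothold (via removable media OR a remote maintenance link) AND defeating
# the zone boundary device.
tree = Node("compromise SDA", "and", children=[
    Node("gain foothold", "or", children=[
        Node("infected removable media", defeated=True),
        Node("exploit remote maintenance link", defeated=False),
    ]),
    Node("defeat zone boundary device", defeated=False),
])

print(attack_succeeds(tree))  # False: the boundary device holds in this scenario
```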
| Evaluation | 4.100. | The effectiveness of the CSP, the DCSA or individual computer security measures can be evaluated quantitatively or qualitatively or both. The competent authority may prescribe deterministic evaluation methods to be used for different types of target, threat and scenario. It is suggested that the overall effectiveness of the CSP and the DCSA be conservatively defined as the lowest effectiveness that still meets regulatory objectives when all adversary tactics and techniques and credible scenarios have been considered. |
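One possible reading of the conservative aggregation suggested in para. 4.100 is sketched below, assuming that each evaluated scenario yields a single effectiveness score on a common scale; the scenario names, scores and regulatory threshold are invented placeholders.

```python
# Illustrative only: effectiveness scores and the threshold are hypothetical.

scenario_effectiveness = {
    "blended attack on physical protection system": 0.92,
    "insider misuse of maintenance laptop": 0.88,
    "remote attack via business network": 0.95,
}

REGULATORY_THRESHOLD = 0.85  # assumed minimum acceptable effectiveness

overall = min(scenario_effectiveness.values())   # conservative aggregation
print(f"Overall effectiveness: {overall:.2f}")
print("Meets objective" if overall >= REGULATORY_THRESHOLD else "Shortfall")
```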
| Verification | 4.101. | The objective of verification in this context is to evaluate the quality of outputs from one phase against the specifications before that output is used in a subsequent phase. |
| Verification | 4.102. | Verification should, where possible, occur between successive phases of facility or system CSRM. |
| Verification | 4.103. | The results of verification might lead to the following actions by the operator: |
| Verification | 4.104. | These verification activities might involve evaluation methods, including exercises, performance testing, simulation or analysis (e.g. vulnerability assessment) (see para. 4.98). |
| Verification | 4.105. | For example, evaluation of outputs based on attack tree analysis includes consideration of the flow of information between systems, devices, networks and locations. The exchange of information between systems can allow adversaries to exploit these pathways, potentially leading to compromise of systems and thereby of facility functions. Attack tree analysis at this stage considers generic pathways with the aim of minimizing or eliminating the possibility of an adversary gaining access to these pathways. |
| Verification | 4.106. | The operator should use a graded approach when determining the level of effort to be applied to verification and validation. The greatest level of effort should be applied to those functions or systems assigned to the most stringent computer security levels (i.e. those requiring the greatest level of protection). |
| Verification | 4.107. | Verification should be repeated on a regular basis (e.g. annually) or as needed to take into account any changes in targets or in the nuclear security programme requirements. |
| Validation | 4.108. | The operator should validate that the systems, when integrated together, have the appropriate level of protection to meet computer security requirements as expressed in the CSP and the DCSA. Figure 7 illustrates the verification and validation activities within the CSRM process, CSP and DCSA. |
| Validation | 4.109. | The operator should validate that the systems, as they are installed at the facility level, have the appropriate level of computer security protection to perform their facility functions to meet requirements as expressed in the facility security requirements. |
| Validation | 4.110. | The operator should validate that the level of computer security protection is sufficient to ensure that the operation of the facility meets regulatory requirements or operator requirements as expressed in the facility security requirements. |
| Validation | 4.111. | Where the validation indicates that the level of protection is not sufficient, the operator should revise its CSP and DCSA to increase protection. The operator may not reduce the level of protection without the agreement of the competent authority. |
| Validation | 4.112. | The operator should validate the outputs of both the facility and system CSRM processes. The facility CSRM outputs should be validated against the operator’s and regulatory requirements. The system CSRM outputs should comply with the CSP and DCSA requirements. |
| Validation | 4.113. | The operator should determine the aggregate facility risk level, including reference to applicable regulatory and design requirements. This aggregation should also include the system risk level for each individual system that contains an SDA. |
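The aggregation method itself is not prescribed in this publication. The sketch below shows one conservative convention an operator might adopt, in which the facility risk level is taken as the worst system risk level among systems containing SDAs; the ordinal scale and system names are assumptions.

```python
# Illustrative only: the ordinal risk scale, system names and values are assumed.

RISK_ORDER = ["low", "medium", "high", "very high"]

system_risk = {
    "reactor protection system": "low",
    "central alarm station servers": "medium",
    "nuclear material accountancy database": "high",
}

def aggregate_facility_risk(system_risk_levels: dict) -> str:
    """Take the worst-case system risk level as the facility level
    (one conservative convention; not prescribed by the publication)."""
    return max(system_risk_levels.values(), key=RISK_ORDER.index)

print(aggregate_facility_risk(system_risk))  # -> "high"
```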
| Validation | 4.114. | The operator should validate the facility and system level risk assessments against the national threat statement or DBT using scenarios that involve attacks affecting multiple systems and the overall architecture. These scenarios differ from those used in system CSRM (para. 5.5(j)) and those specified in the national threat statement or DBT. They might include blended attacks involving compromise of a number of separate systems with the aim of identifying vulnerabilities somewhere in the facility. |
| Validation | 4.115. | Full validation of the results of both facility CSRM and system CSRM should include consideration of both technical and functional scenarios as described below. |
| Scenario identification and development | 4.116. | The operator should identify and develop scenarios based on the State’s assessment of the threats as detailed in the national threat statement or DBT and, where appropriate, the facility specific threat assessment. Operators are strongly encouraged to include experts in cyber‑attacks and related threat capabilities in the development of these scenarios. This expertise can be found in competent authorities, intelligence services and law enforcement agencies. The operator might be required to provide these detailed scenarios to the competent authority for review and acceptance. |
| Scenario identification and development | 4.117. | Analysis of scenarios might provide insight into the most vulnerable points within the facility, processes, system architectures and procedures. Further analysis might be needed to identify computer security measures already in place or those that need to be added to address the identified vulnerabilities. |
| Scenario identification and development | 4.118. | Scenarios should be used in verifying the results of the facility computer security risk assessment, including the analysis of possible adversary tactics, likelihood of attack and potential consequences. |
| Scenario identification and development | 4.119. | The scenarios should be reassessed periodically to ensure that they remain sufficient to meet security objectives in the light of changes in the threats. |
| Scenario identification and development | 4.120. | There are two categories of scenario: |
| Scenario identification and development | 4.121. | These scenarios are developed and analysed between facility CSRM and system CSRM phases, and within elements of facility CSRM if needed for analysis. These scenarios are necessary to raise confidence in the outputs of the requirement specification phase but can also be used to develop these requirements. The set of scenarios used for analysis to develop the requirements cannot be identical to the set of scenarios used in assurance activities. |
| Scenario identification and development | 4.122. | Scenarios considered should include multiple attack routes (e.g. via different networks and local systems), attacks involving insiders and blended attacks. They should also include the potential for sequential cyber‑attacks that multiply the consequence but that show no indications of collusion between different adversaries (non‑collaborative attacks). |
| Scenario identification and development | 4.123. | Scenarios can include the following: (a) Stand‑alone attacks by a single adversary; (b) Coordinated attacks by a group of adversaries working together; (c) Opportunistic attacks, in which independent adversaries effectively create a combined attack. For example, a vulnerability is publicly disclosed by one adversary, which allows other adversaries to target the facility systems and equipment; (d) Specific threat capabilities [9]; (e) Blended attacks with coordinated cyber and physical elements. Attack tree analysis can help in identifying threat scenarios as well as in identifying protective strategies. |
| Scenario identification and development | 4.124. | Scenarios should be periodically reviewed and updated in the following instances: |
| Scenario identification and development | 4.125. | For the most significant scenarios, specific attack vectors and components should be identified and their risks documented. |
| Scenario identification and development | 4.126. | The facility CSP documentation should describe the computer security measures required to maintain protection against adversaries analysed during the assessment. |
| Scenario identification and development | 4.127. | The output of facility CSRM should comprise the facility CSP documentation and a determination of the aggregate facility risk based on an evaluation of the effectiveness of those measures identified in the CSP as providing protection against adversaries described in the national threat statement or DBT. |
| Scenario identification and development | 4.128. | The facility CSRM report should include a high level review and analysis of security system design and configuration management as detailed in the CSP. A more detailed analysis should be performed during system CSRM. |
| Scenario identification and development | 4.129. | Facility functions and their corresponding systems in the facility CSRM output should be addressed in comprehensive system level risk assessments as described in Section 5. |
| Scenario identification and development | 4.130. | The operator’s assessment of risk associated with different functions and aggregate facility risk should be provided to the competent authority. |
| Scenario identification and development | 5.1. | The operator should establish a systematic and periodically reviewed process for managing the computer security risk to digital assets, including SDAs, within the systems that perform the facility functions identified in the facility CSRM process35. Compromise of SDAs typically has the potential to lead to very high, high or medium severity consequences (as described in Ref. [7]). Facility CSRM should include system CSRM for each system, as described in this section. System CSRM should consider all digital assets in the system, including SDAs. |
| Scenario identification and development | 5.2. | System CSRM should be performed by a multidisciplinary team similar to that for facility CSRM. However, the composition of the system CSRM team may be tailored to address specific considerations associated with each system. |
| Scenario identification and development | 5.3. | The operator should use a graded approach when determining the level of effort to be applied to risk management for each system. The greatest level of effort should be applied to those systems that perform or support the facility functions assigned to the most stringent computer security levels (i.e. those requiring the greatest level of protection) as determined in the facility CSRM process. |
| Scenario identification and development | 5.4. | The primary objective of the system CSRM is to evaluate and manage the computer security measures to ensure that they provide the appropriate level of protection for the specific system (i.e. that required for its computer security level) according to the requirements defined in the facility CSRM output. |
| Scenario identification and development | 5.5. | To meet this objective, system CSRM includes the following steps: |
| Scenario identification and development | 5.6. | This process may result in the identification of other digital assets that were not part of the systems assigned to facility functions during facility CSRM, or were identified as being outside a system or zone boundary during system CSRM. In such cases, additional analysis should be performed to ensure the inclusion of all associated digital assets in the assessment and the CSP. |
| Scenario identification and development | 5.7. | The outputs of system CSRM should include the prioritization of risks within the system to determine the appropriate implementation of computer security measures. The process should include consideration of the location of the components that make up the system, vulnerabilities, and computer security levels and zones if defined, as well as the significance of SDAs and other digital assets within the system under assessment. |
| Scenario identification and development | 5.8. | The operator should perform system CSRM in the following instances: |
| Scenario identification and development | 5.9. | The following inputs should be identified and made available for use during system CSRM: |
| Overall defensive computer security architecture requirements for computer security | 5.10. | The operator should use the requirements for the DCSA set out during facility CSRM to design, implement and maintain computer security measures for systems and digital assets to prevent, detect, delay, mitigate and recover from cyber‑attacks. |
| Overall defensive computer security architecture requirements for computer security | 5.11. | Computer security measures should be effective throughout the lifetime of the facility, for example during periods of maintenance and decommissioning, when significant configuration changes may be made. Monitoring, maintenance and recovery activities should not provide means by which an adversary might bypass computer security measures, for example bypassing the protection on communication pathways between facility functions that have different computer security levels. |
| Overall defensive computer security architecture requirements for computer security | 5.12. | Computer security boundaries36 should be applied between computer security zones and should be protected using different computer security measures. |
| Overall defensive computer security architecture requirements for computer security | 5.13. | Data flow should be controlled between zones of different computer security levels and between zones of the same computer security level, using a risk informed approach, to ensure that the DCSA remains effective. |
| Definition of system boundaries | 5.14. | The system boundary defines the scope for each system’s CSRM and encompasses the systems identified as providing a particular facility function on the basis of the facility characterization. This should include considerations of interdependencies between facility functions and their systems. |
| Definition of system boundaries | 5.15. | System CSRM should include identifying and documenting the system boundaries. These include all the components, subcomponents, interfaces and environments of the system in question during all stages in the lifetime of the facility, as well as those other systems that provide support or auxiliary functions. |
| Definition of system boundaries | 5.16. | The following steps can be used to define the boundaries of the system under assessment: |
| Definition and construction of computer security zones | 5.17. | The CSP and DCSA specifications produced during facility CSRM place computer security requirements on the implementation of the zone model. The CSP will also include a list of facility functions and the systems assigned to them. |
| Definition and construction of computer security zones | 5.18. | The operator should implement computer security measures to meet the requirements set out in the DCSA specification. In doing so, consideration should also be given to achieving the following [8]: |
| Definition and construction of computer security zones | 5.19. | Digital assets should be considered for separation into distinct zones when any of the following conditions are met: |
| Definition and construction of computer security zones | 5.20. | Digital assets may be considered for assignment to different zones, despite being assigned the same computer security level, in the following cases: |
| Definition and construction of computer security zones | 5.21. | Network connections and local exchanges (e.g. via removable media or mobile devices) of data between systems in different zones should be limited to only those that are essential. Where network connections across zone borders are essential, they should be established from the zone with the higher computer security level to the zone with the lower computer security level. Restrictions can be applied using technical control measures (e.g. filtering devices) or administrative control measures (e.g. rules for the use of removable media on a specific system). Network connections and methods that are permitted for disconnected exchange of data should be documented. |
| Definition and construction of computer security zones | 5.22. | A specific zone can only include systems (and digital assets) of the same computer security level. The zone is assigned the computer security level of the systems within the zone. A given computer security level can and should apply to different zones. However, in some specific cases it might be difficult to separate systems assigned to different computer security levels into different zones. In such cases, some systems could become part of a zone assigned a more stringent computer security level than they need. |
| Definition and construction of computer security zones | 5.23. | Communications should be allowed only between zones of the same computer security level or adjacent levels. Communications between zones with different computer security levels should be limited to specific zone entry points (e.g. one entry point filtering connections between zones with computer security level 2 and zones with computer security level 3). Security measures for all entry points should be defined in an efficient and consistent manner to enforce a secure overall architecture. Specific checks should be applied at a zone entry point, for example on the content of data (e.g. acceptable ranges of parameter values) entering or leaving, or the data’s digital signature. Zone entry points should also have specific event log monitoring. |
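To make the rules in paras 5.21–5.23 concrete, the sketch below checks whether a proposed data flow between two zones would be permitted: only between zones of the same or adjacent computer security levels, and only through a declared entry point when the levels differ. The zone names, the numbering convention (level 1 most stringent) and the entry point list are hypothetical.

```python
# Illustrative only: zones, levels and entry points are hypothetical examples.

# Hypothetical zone model: level 1 is the most stringent.
ZONE_LEVEL = {
    "safety-zone-A": 1,
    "security-zone-B": 2,
    "plant-network": 3,
    "office-network": 4,
}

# Declared entry points between pairs of zones at different levels (para. 5.23),
# keyed with the more stringent zone first.
ENTRY_POINTS = {
    ("security-zone-B", "plant-network"): "firewall-FW2",
    ("plant-network", "office-network"): "data-diode-DD1",
}

def communication_permitted(src: str, dst: str) -> bool:
    """Allow communication only between zones of the same or adjacent
    computer security levels, and only via a declared entry point when
    the levels differ (paras 5.21-5.23)."""
    lvl_src, lvl_dst = ZONE_LEVEL[src], ZONE_LEVEL[dst]
    if abs(lvl_src - lvl_dst) > 1:
        return False                    # non-adjacent levels: not allowed
    if lvl_src == lvl_dst:
        return True                     # same level: subject to normal rules
    pair = (src, dst) if lvl_src < lvl_dst else (dst, src)
    return pair in ENTRY_POINTS         # must pass a declared entry point

print(communication_permitted("security-zone-B", "plant-network"))  # True
print(communication_permitted("safety-zone-A", "plant-network"))    # False
```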
| Identification of digital assets | 5.24. | The following records should be consulted when identifying a system’s digital assets: |
| Identification of digital assets | 5.25. | The list of digital assets may include their identifiers, key technical specifications and data, descriptions of their interfaces, references to facility level and system level risk assessments, and their assigned owners. |
| Identification of digital assets | 5.26. | The list of digital assets should be maintained during the lifetime of the facility and periodically reviewed. The list should also be reviewed and updated if necessary whenever a system level risk assessment is performed. |
| Identification of digital assets | 5.27. | Digital assets that are also sensitive information assets should be designated as SDAs. Digital assets that might facilitate or contribute to an adverse effect on the function of SDAs should also be identified and considered in the digital asset analysis to determine, consistent with the CSP, whether they should be designated as SDAs. |
| Identification of digital assets | 5.28. | The list of SDAs should be classified and protected as sensitive information. |
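A minimal sketch of the kind of inventory record suggested in paras 5.25 and 5.27, in which each digital asset entry carries an identifier, key technical data, interface descriptions, an owner and an SDA designation. The field names and example assets are illustrative assumptions, not a prescribed schema.

```python
# Illustrative inventory record; the schema is an assumption, not a standard.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalAsset:
    asset_id: str
    description: str
    technical_data: str            # e.g. model, firmware version
    interfaces: List[str] = field(default_factory=list)
    owner: str = ""
    risk_assessment_refs: List[str] = field(default_factory=list)
    is_sda: bool = False           # sensitive digital asset designation (para. 5.27)

inventory = [
    DigitalAsset("PLC-017", "coolant valve controller", "ModelX fw 2.4",
                 interfaces=["fieldbus", "engineering port"],
                 owner="I&C maintenance", is_sda=True),
    DigitalAsset("WS-104", "work planning workstation", "standard build 12",
                 interfaces=["office LAN"], owner="planning group"),
]

# The list of SDAs itself is sensitive information (para. 5.28).
sda_list = [a.asset_id for a in inventory if a.is_sda]
print(sda_list)  # -> ['PLC-017']
```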
| System computer security architecture, including digital asset analysis | 5.29. | The operator should identify key tasks and activities necessary to provide computer security for the facility. These tasks and activities should be associated with computer security levels and their corresponding computer security measures. The operator should ensure that the necessary resources and capabilities are available to perform those tasks and activities. |
| System computer security architecture, including digital asset analysis | 5.30. | The system CSRM process should identify all SDAs. Digital assets that are not SDAs may also need to be considered in the analysis of specific threats or types of attack if their compromise could adversely affect an SDA. The level of effort associated with the system level risk assessment should be graded to ensure that those systems assigned the highest computer security level are also subject to the most robust assessment. |
| System computer security architecture, including digital asset analysis | 5.31. | In general, systems that perform the same facility function should be assigned the same computer security level, including independent, diverse and redundant systems. The assignment of a less stringent computer security level to any such systems is strongly discouraged and may be considered only on a case by case basis if supported by a specific justification and security risk analysis. |
| System computer security architecture, including digital asset analysis | 5.32. | Asset analysis of SDAs should include consideration of information about the hardware, firmware and software of the SDA, which can be used as input to a vulnerability analysis. The vulnerability analysis may lead to a recommendation to perform procedures to identify, disable or remove unneeded services, ports or interfaces on the system (or network) of the SDA to reduce attack surface (i.e. system hardening; see para. A.64). |
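As a simple illustration of the hardening recommendation in para. 5.32, the sketch below compares the services found on an SDA against an approved baseline and reports anything that should be disabled, removed or justified. The service names and the baseline are invented for the example; a real assessment would draw on the vulnerability analysis.

```python
# Illustrative only: discovered services and the approved baseline are hypothetical.

APPROVED_BASELINE = {"plant-protocol/102", "ntp/123"}

discovered_services = {"plant-protocol/102", "ntp/123", "telnet/23", "http/80"}

unneeded = discovered_services - APPROVED_BASELINE
if unneeded:
    print("Services to disable, remove or justify:", sorted(unneeded))
else:
    print("No deviations from the hardening baseline.")
```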
| System computer security architecture, including digital asset analysis | 5.33. | The interfaces of each system (including its digital assets) should be analysed and categorized with respect to the zone boundary. The following categories may be used: |
| System computer security architecture, including digital asset analysis | 5.34. | All digital assets with trusted internal communication pathways within a zone should be assigned to the same computer security level, namely that of the zone. |
| System computer security architecture, including digital asset analysis | 5.35. | Zone boundary devices should be assigned to a computer security level equivalent to the highest (most stringent) level applied to the equipment for which they are intended to provide protection. For example, a firewall between two zones of different computer security levels may have a trusted internal communication pathway with the zone assigned the higher computer security level but only an authorized external communication pathway with the other zone. |
| System computer security architecture, including digital asset analysis | 5.36. | Another example of a zone boundary device may be a malware detection kiosk, or antivirus scanner, which is used to scan removable media and mobile devices before entering and exiting a zone. This kiosk would be assigned the highest computer security level applied to anything in the zone for which it is intended to provide protection.38 In this case, the operator needs to ensure that the kiosk does not provide a common route for the compromise of different systems in different zones (e.g. by providing a common vulnerability that can be exploited to compromise different systems). |
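Paragraphs 5.35 and 5.36 indicate that a zone boundary device takes the most stringent computer security level among the zones it protects. The short sketch below expresses that rule, again assuming a numbering in which level 1 is the most stringent.

```python
# Illustrative only: zone levels are hypothetical; level 1 = most stringent.

def boundary_device_level(protected_zone_levels):
    """A zone boundary device (e.g. firewall, scanning kiosk) is assigned the
    most stringent level among the zones it protects (paras 5.35-5.36)."""
    return min(protected_zone_levels)

print(boundary_device_level([2, 3]))  # firewall between level 2 and 3 -> level 2
```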
| System computer security architecture, including digital asset analysis | 5.37. | All digital assets, including SDAs, that are connected via a trusted internal communication pathway should comply with the overall DCSA requirements. Permitted external communications need additional computer security measures (see para. 5.33(b)). |
| System computer security architecture, including digital asset analysis | 5.38. | SDAs may be allowed to be in proximity (logical or physical) to other SDAs provided that computer security measures are in place to ensure that these systems cannot interact through potential unauthorized communication pathways. These measures might be solely administrative control measures. Typically, SDAs are assigned to the higher computer security levels (e.g. levels 1–3). |
| System computer security architecture, including digital asset analysis | 5.39. | Digital assets that are not authorized to communicate with SDAs should not be allowed to be in logical or physical proximity to SDAs where there is the potential to have unauthorized communication pathways. The DCSA should provide for the design and maintenance of robust computer security measures to eliminate such pathways or create compensatory measures to reduce the potential for them to be used. |
| System computer security architecture, including digital asset analysis | 5.40. | Unassigned digital assets (i.e. those not assigned to a computer security level) should never be in proximity to SDAs. For example, a vendor’s equipment or personal mobile devices that have not been evaluated and assigned should be treated as potentially malicious devices to SDAs and should not be allowed in logical or physical proximity to facility SDAs. |
| System computer security architecture, including digital asset analysis | 5.41. | Asset analysis should include assessing the effects of credible scenarios of cyber‑attack on the system and the risk to the facility. The assessment should take account of the possibility that cyber‑attacks might occur during any stage of the lifetime of the facility or any phase of the system’s life cycle. |
| System computer security architecture, including digital asset analysis | 5.42. | Cyber‑attacks might affect an individual system or multiple systems and could be used in combination with other forms of malicious act causing physical damage. These potential specific component level interactions should be listed within the assessment report and assessed. |
| System computer security architecture, including digital asset analysis | 5.43. | The assessment should include consideration of malicious actions that could change process signals, equipment configuration data or software. |
| System computer security architecture, including digital asset analysis | 5.44. | The asset analysis should include identifying the locations at which information is stored and the pathways by which information flows within the system (including its digital assets). The analysis should also identify and justify the measures in place to protect the necessary data flows and communications and identify any possible remaining vulnerabilities. The analysis could be supported by the following: |
| System computer security architecture, including digital asset analysis | 5.45. | For example, consider the exchange of software (e.g. source code, object code) between a development environment and a security system. If no computer security measures are in place, then the compiler (hardware and software) will be assigned to the same zone (and computer security level) as the security system itself, since no boundary exists. However, if security measures are applied at the boundary between the compiler and the system — for example, testing the integrity of data and identifying any vulnerabilities in the code coming from the compiler — the compiler could be placed in a separate zone and assigned to a computer security level different from that of the system itself. The measures applied to the compiler’s output are credited with protecting the system and so would be assigned the same level as the system to which they provide protection. |
| System computer security architecture, including digital asset analysis | 5.46. | The analysis of digital assets should produce a list and description of the specific computer security measures that are implemented for each system. The measures should be a combination of technical, administrative and physical control measures. |
| System computer security architecture, including digital asset analysis | 5.47. | The analysis of digital assets should provide a qualitative or quantitative value of the acceptable risk threshold. |
| Verification of the system computer security risk assessment | 5.48. | The operator should verify and validate the system computer security risk assessment for each system as defined by the scope of the assessment. The verification of system CSRM outputs may use the evaluation methods outlined in para. 4.98 for facility CSRM. |
| System scenario identification and development | 5.49. | The national threat statement or DBT provides a basis for the generation of credible scenarios based on the motivation, capabilities, intentions and opportunity of potential adversaries (including adversaries using cyber techniques). |
| System scenario identification and development | 5.50. | The operator should develop credible scenarios for each system on the basis of the threat characterization as a basis for the validation of the computer security measures that provide protection to the system. The credible scenarios should include potential sequences of adversary actions that might result in compromise of SDAs. |
| System scenario identification and development | 5.51. | Scenarios should include common attack routes and techniques. These may include the following: |
| System scenario identification and development | 5.52. | Scenarios should be developed consistent with the national threat statement or DBT that applies to the facility to identify those SDAs that might be exposed to such attacks. It may be beneficial to start scenario development by considering the most likely or the highest consequence cases. |
| System scenario identification and development | 5.53. | The development of scenarios should have the following aims (in order of significance): |
| System scenario identification and development | 5.54. | Evaluation methods (para. 4.98) should use credible scenarios (paras 4.116–4.125) to verify the effectiveness of implemented computer security measures. |
| System scenario identification and development | 5.55. | The operator should verify that digital assets, including SDAs, are appropriately protected against the adversaries identified in the national threat statement or DBT that applies to the facility. |
| System computer security risk management report | 5.56. | The output of system CSRM should be documented in a report that includes the following: |
| System computer security risk management report | 5.57. | The system CSRM report should be classified as sensitive information and protected accordingly. |
| System computer security risk management report | 6.1. | This section provides guidance specific to the different stages in the lifetime of a facility. |
| System computer security risk management report | 6.2. | The operator should review its plans for the facility against the regulations of the competent authority and identify issues that need to be addressed to meet regulatory requirements. |
| System computer security risk management report | 6.3. | The operator should ensure that it has a formalized methodology to perform a detailed facility CSRM process. |
| System computer security risk management report | 6.4. | The operator should develop the facility CSRM process as described in Section 4. |
| System computer security risk management report | 6.5. | The operator should verify that, provided that the DCSA specification can be met, the residual risk will not exceed the acceptable levels. |
| System computer security risk management report | 6.6. | The operator should plan the development of the competencies needed to support computer security during all stages in the lifetime of the facility. |
| System computer security risk management report | 6.7. | The planning stage may include activities in locations away from the intended facility site. The operator should apply computer security measures to the information used in these activities, and to other inputs to and outputs from the planning life cycle, that are sensitive information or make use of sensitive information assets. |
| System computer security risk management report | 6.8. | The operator should include computer security considerations in the siting stage of the facility, because some activities supporting computer security can only be performed in relation to the specific site, not remotely or generically (e.g. establishment of isolated networks, access for computer incident response teams, identification of the availability of expertise in computer security in the local workforce). |
| System computer security risk management report | 6.9. | In its siting plans for the location of major equipment, the operator should take into account the need to allow for operation of physical control measures that will be necessary to complement computer security measures. |
| System computer security risk management report | 6.10. | In siting, the operator should consider the availability of local infrastructure to support computer security measures (e.g. emergency communications networks). |
| System computer security risk management report | 6.11. | The operator should use the output of the facility CSRM work conducted during the planning stage to ensure that the facility design process provides for computer security requirements for facility functions (expressed in the DCSA and the CSP) to be met as an integral part of the system engineering activities for the facility. This applies to the design of a new facility or to the modification of the design for refurbishment or modification of the facility during the operation stage of the facility. |
| System computer security risk management report | 6.12. | The design process should take into account computer security requirements that arise owing to the dependencies between facility functions, as identified during the facility CSRM process. |
| System computer security risk management report | 6.13. | Computer security requirements should be provided in sufficient detail to allow design decisions to be made, the design to be verified and design changes to be evaluated. |
| System computer security risk management report | 6.14. | The operator should perform system CSRM for each system, including verification at each step of the design of the computer security measures. |
| System computer security risk management report | 6.15. | Physical and remote accessibility of the SDAs within vital areas by an insider should be considered at the design stage. |
| System computer security risk management report | 6.16. | The operator should develop computer security validation criteria for the commissioning stage. Systems performing facility functions assigned the highest computer security levels should be independently validated. |
| System computer security risk management report | 6.17. | Staff knowledgeable in computer security from different parts of the operating organization should be involved in the design process to ensure the following: |
| System computer security risk management report | 6.18. | The design should include the necessary directions for implementation of the computer security requirements. Design information, such as analysis reports, should be retained so that it is available in the future to authorized users of the design. |
| System computer security risk management report | 6.19. | Because design documents might contain sensitive information related to computer security, all design documents should be classified according to the information classification scheme and protected accordingly. |
| System computer security risk management report | 6.20. | The operator should ensure that any computer security requirements that need to be followed by vendors, contractors and suppliers are specified in their contracts39 [19]. Vendors, contractors and suppliers should be required to have computer security management systems and secure engineering environments in place and to apply security by design to the SDAs that they produce or supply. |
| System computer security risk management report | 6.21. | The operator should ensure that physical, administrative and technical control measures are established during the construction process to maintain the preventive and protective measures required by the CSP and the DCSA. For example, if lockable doors are to be installed on an enclosure, the locks should be installed and placed under control before installing SDAs within the enclosure, or appropriate compensatory measures should be put in place. |
| System computer security risk management report | 6.22. | The operator should ensure that the following computer security actions are performed as required by the CSP and the DCSA during the construction stage: |
| System computer security risk management report | 6.23. | The operator should include the testing of computer security measures in its acceptance testing for the delivery of systems to the facility from the system provider. |
| System computer security risk management report | 6.24. | The operator should perform configuration and testing activities during system and DCSA integration (see Fig. 7) to meet computer security requirements. For example, the following activities should be performed: |
| System computer security risk management report | 6.25. | The operator should perform validation testing of the computer security measures. Validation of computer security measures and physical protection measures should be conducted jointly to ensure appropriate integration. |
| System computer security risk management report | 6.26. | If there is a conflict between safety measures and security measures, then the measures to ensure safety should be maintained and the operator should find a solution that also meets computer security requirements. Until such a solution is in place, compensatory computer security measures should be implemented to reduce the risk to an acceptable level and should be supported by a comprehensive justification and security risk analysis. The compensatory measures should not rely solely on administrative control measures for an extended period. The absence of a security solution should never be accepted. |
| System computer security risk management report | 6.27. | Review and approval of applicable CSP documents and supporting materials (required for system operation) should be completed prior to operation. |
| System computer security risk management report | 6.28. | The operator should assign continuing responsibility for design change, management, maintenance and operations of the entire CSP to an individual (supported as necessary by others with appropriate skills and knowledge). |
| System computer security risk management report | 6.29. | The operator should maintain documentation that describes how computer security measures are implemented, in compliance with the CSP, the DCSA and any externally imposed requirements. |
| System computer security risk management report | 6.30. | The operator should ensure that operational requirements are consistent with the computer security level of systems and digital assets. For example, the following might need to be considered: |
| System computer security risk management report | 6.31. | Actions applied to systems as part of a vulnerability assessment might lead to plant or process instability and should therefore only be considered using test beds or spare systems, during factory acceptance tests or during long planned outages. |
| Maintenance | 6.32. | This section applies to short duration maintenance activities that are routinely performed during the operations stage. Extended maintenance (e.g. refurbishment, replacement of systems, repair) is addressed in the design, construction and cessation of operations stages. |
| Maintenance | 6.33. | The operator should ensure that maintenance activities are performed in a manner consistent with the computer security level of the system or digital asset being maintained. For example, in addition to the general considerations during operation listed in para. 6.30, the following steps should be taken: |
| Maintenance | 6.34. | Systems might be at greater risk during maintenance, when computer security measures might be removed or disabled. Furthermore, there may be additional access routes during maintenance, for example arising from the need to enable remote maintenance interfaces or the use of removable media to configure or upgrade software. |
| Maintenance | 6.35. | The operator should put adequate compensatory measures in place when the normal computer security measures are removed or disabled. Examples include the following: |
| Maintenance | 6.36. | During the cessation of operations stage, large scale modifications may be conducted in parallel, affecting multiple systems. |
| Maintenance | 6.37. | The operator should consider applying compensatory measures to address any risk arising from modifications to or degradation of security systems resulting from environmental or structural changes. This may include placing a greater reliance on administrative control measures and on vendors, contractors and suppliers to implement such measures. |
| Maintenance | 6.38. | Examples of changes for which compensatory measures may be applied include the following: |
| Maintenance | 6.39. | When digital assets are decommissioned, the effect of this decommissioning (including any loss of integration with other digital assets outside the facility) on computer security should be evaluated and documented. If decommissioning of a system or digital asset reduces the effectiveness of computer security measures, the operator should put compensatory measures in place. |
| Maintenance | 6.40. | As the set of facility functions changes, the digital assets supporting these functions may be reassigned to a different computer security level or be unassigned. This might lead to a need to modify computer security measures for those digital assets. |
| Maintenance | 6.41. | The operator should ensure the secure destruction of any digital assets containing sensitive information that cannot be securely declassified when they are decommissioned. |
| Maintenance | 7.1. | The computer security policy and programme should provide the basis for computer security requirements defined by the results of facility and system CSRM (Sections 4 and 5, respectively) and in consideration of the specific stages in the lifetime of the facility (Section 6). |
| Maintenance | 7.2. | Computer security at nuclear facilities should be recognized by senior management and managers as a cross‑cutting discipline that needs specialized knowledge, expertise and skills. |
| Maintenance | 7.3. | Senior management has overall responsibility for computer security at a nuclear facility and needs awareness and understanding of the cyber threat and the potential adverse effect of a cyber‑attack on nuclear security. |
| Maintenance | 7.4. | Senior management should ensure that all the operator’s interactions with others and all internal processes are consistent with legal and regulatory requirements related to information and computer security. |
| Maintenance | 7.5. | Managers should promulgate the beliefs and values of nuclear security culture as they pertain to computer security. This includes promoting recognition that a credible threat exists from adversaries with cyber skills, and that these adversaries (including insider threats) might target nuclear facilities via a cyber‑attack or a blended attack. |
| Computer security policy | 7.6. | A computer security policy sets the high level computer security goals of an organization. The computer security policy should begin with a clear statement of why it is being established and should define the issue being addressed, as well as the goals and the consequences if the policy is not followed. The policy should be consistent with the State’s computer security policy and appropriate regulatory requirements. The policy should be enforceable and achievable and should include indicators that can be measured and audited. |
| Computer security policy | 7.7. | The operator’s computer security policy should take into account the results of facility CSRM (see Section 4). The computer security policy should require the protection of digital assets, including SDAs, against compromise from cyber‑attacks. Individual policy clauses should be clear and concise in identifying these requirements. Implementation of the requirements is addressed in detail in the CSP. |
| Computer security policy | 7.8. | The computer security policy should be endorsed and enforced by senior management. It should identify the organization or individual that owns the policy and the CSP. |
| Computer security policy | 7.9. | The computer security policy should be part of the overall facility security policy and should be coordinated with other relevant security responsibilities. When establishing a computer security policy, its effect on legal aspects and human resources also needs to be considered. |
| Computer security policy | 7.10. | The computer security policy may identify potential penalties and disciplinary actions against personnel not complying with the policy requirements. |
| Computer security policy | 7.11. | The computer security policy should be reflected in the CSP and through other lower level CSP elements that support implementation of computer security. |
| Computer security policy | 7.12. | The computer security policy needs to set out clear indicators that can be used to demonstrate that the policy is being met in all aspects and that each aspect is being performed satisfactorily. |
| Computer security programme | 7.13. | The CSP contains details of how the goals set out in the computer security policy are achieved. The CSP establishes the organizational roles, responsibilities, processes and procedures for implementing the computer security policy. A CSP may be specific to a facility (including its associated buildings and equipment) or an organization (including all its sites and organizational units). |
| Computer security programme | 7.14. | The CSP should be developed, exercised and maintained within the framework of the facility’s overall security plan. |
| Computer security programme | 7.15. | The CSP should take account of the results of facility CSRM (Section 4). Development of the CSP may include personnel involved in computer security, physical protection, safety, operations and information technology (IT). The CSP is illustrated schematically in Fig. 8. |
| Computer security programme | 7.16. | The CSP should be reviewed and updated (a) periodically to reflect developments in technology and threats and (b) in the event of computer security incidents or other nuclear security events. |
| Elements of the computer security programme | 7.17. | Reference [7] describes the elements of a CSP generally applicable for organizations within the nuclear security regime. Paragraphs 7.18–7.20 provide more specific details on elements of a CSP for nuclear facilities. |
| Elements of the computer security programme | 7.18. | The elements of the CSP should include addressing system vulnerabilities, applying computer security measures, performing risk analysis and conducting assurance activities to achieve an acceptable level of computer security risk. |
| Elements of the computer security programme | 7.19. | The elements of the CSP should be adapted and applied to the different stages in the lifetime of a facility and to different phases of the individual systems’ life cycles. Specific details of implementation in these different cases should be provided in the CSP. |
| Elements of the computer security programme | 7.20. | The operator should tailor the CSP to its facility, but it is suggested that as a minimum the following areas be included:
|
| Elements of the computer security programme | 7.21. | Further information on CSP elements can be found in international standards [19–21]. |
| Elements of the computer security programme | 7.22. | The operator should define computer security related roles and responsibilities within the organization. |
| Elements of the computer security programme | 7.23. | Managers should ensure that all staff understand who within the organization is responsible for leading the CSP in the functional areas relevant to their work. Staff with computer security responsibilities need to be trained in the elements of and requirements specified in the CSP. |
| Elements of the computer security programme | 7.24. | The management of computer security should be integrated into the existing management system for the facility (see paras 7.30–7.34) to the extent possible and practicable. For existing facilities, the management system will already include well defined roles and responsibilities, and these should be adjusted to incorporate computer security. |
| Elements of the computer security programme | 7.25. | Personnel with significant computer security responsibilities should not have conflicts of interest with other functions of the organization or with other duties. Managers should put in place policies and processes to avoid or mitigate any potential conflicts. |
| Elements of the computer security programme | 7.26. | The operator should ensure that individuals or organizations performing key assessment and verification activities are appropriately qualified and independent. |
| Elements of the computer security programme | 7.27. | Computer security needs cooperation between staff in different roles and organizational units. The operator should put in place a formalized framework with the aim of ensuring interdisciplinary cooperation. |
| Elements of the computer security programme | 7.28. | The operator needs to identify the external and internal interfaces involved in the CSP. This includes the following:
|
| Elements of the computer security programme | 7.29. | The operator should manage risk through a formalized process (i.e. facility and system CSRM) that assesses and manages risk and vulnerabilities at the facility. The operator should use the results of these processes within its management system. |
| Management system | 7.30. | The management system should be integrated to include computer security, physical protection, safety, health, environmental, quality and financial elements. |
| Management system | 7.31. | The management system should have formal and established interfaces with the facility and system CSRM. |
| Management system | 7.32. | The computer and information security goals should be defined and managed within the management system in a manner similar to other business objectives. |
| Management system | 7.33. | The management system should be reviewed to ensure its completeness and compliance with facility security policies. It should be periodically reviewed and adapted to changing conditions in the facility and in the environment. Figure 3 of Ref. [22] illustrates the continual improvement process for management systems. |
| Management system | 7.34. | The elements of the CSP (including facility and system CSRM) should be reviewed and the necessary provisions for computer security should be integrated into the management system. |
| Computer security indicators | 7.35. | Computer security indicators can be an effective tool for security managers to measure the maturity of the management system; the risk associated with potential cyber‑attacks affecting SDAs; the effectiveness of different components of their security programmes; the security of a specific system, product or process; and the ability of staff within the organization to address security issues for which they are responsible. |
| Computer security indicators | 7.36. | Indicators should support decisions concerning acceptable risk and provide an input to a risk registry. |
| Computer security indicators | 7.37. | An analysis should be performed to identify parameters and establish indicators that support effective management of the CSP. Indicators that may be useful include mean time to recover (from cyber‑attack), number of computer security incidents, number of restorations of SDAs (potential reoccurrences), security backlogs and vulnerability tracking information (e.g. common scoring system, mitigation effectiveness, control deployment time, patch deployment). |
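A minimal sketch of how such indicators might be derived, assuming hypothetical incident records and treating mean time to recover and incident count as the indicators of interest; the record format is an assumption chosen for illustration:

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incident records: (detected, recovered) timestamps.
incidents = [
    (datetime(2024, 3, 1, 8, 0), datetime(2024, 3, 1, 14, 30)),
    (datetime(2024, 6, 12, 22, 15), datetime(2024, 6, 13, 9, 0)),
]

# Mean time to recover, in hours, as one possible CSP indicator.
mttr_hours = mean(
    (recovered - detected) / timedelta(hours=1)
    for detected, recovered in incidents
)

# Number of computer security incidents in the reporting period.
incident_count = len(incidents)

print(f"Incidents this period: {incident_count}")
print(f"Mean time to recover: {mttr_hours:.1f} h")
```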
| Computer security indicators | 7.38. | The use of the indicators should be integrated into the organization’s management system. |
| Computer security indicators | 7.39. | Facility and system security design is specified in facility and system CSRM (see Sections 4 and 5, respectively). One practical implementation of these outputs, namely the DCSA and measures assigned to computer security levels, is described in Section 8. |
| Computer security requirements | 7.40. | Modifications to the facility or system should be analysed before changes are made to determine their potential effects on security, so that the associated risks can be managed. |
| Computer security requirements | 7.41. | Computer security should be considered as a factor when determining the design inputs, which include the following:
|
| Computer security requirements | 7.42. | The operator should, for each digital asset, document attributes that have significance for computer security. These attributes may include the following:
|
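A minimal sketch of how such attributes might be recorded per digital asset; the field names below are assumptions chosen for illustration, since the actual attribute set is determined by the operator's CSRM:

```python
from dataclasses import dataclass, field

# Hypothetical attribute set for one digital asset record.
@dataclass
class DigitalAssetRecord:
    asset_id: str
    description: str
    facility_function: str          # function the asset supports
    security_level: int             # assigned computer security level
    zone: str                       # computer security zone
    owner: str                      # responsible system owner
    software_versions: dict = field(default_factory=dict)
    interfaces: list = field(default_factory=list)  # connected systems/networks

gateway = DigitalAssetRecord(
    asset_id="DA-0042",
    description="Process data gateway",
    facility_function="process supervision",
    security_level=3,
    zone="Zone B",
    owner="I&C engineering",
    software_versions={"firmware": "2.4.1"},
    interfaces=["plant historian", "control room HMI"],
)
print(gateway.asset_id, gateway.security_level)
```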
| Computer security requirements | 7.43. | Digital asset management should take into account the equipment status of technical control measures that use digital technology. Computer security operations and physical protection operations may have joint responsibility for integrated security measures, systems and procedures. Joint operational control may include control over physical devices used to protect computer equipment (e.g. rooms, doors, keys, locks, cameras, motion sensors, tamper indicators). |
| Configuration management | 7.44. | The goal of configuration management is to have detailed, up to date records of the installed software and hardware components and how they are configured. Configuration management should include information needed for the following:
|
| Configuration management | 7.45. | Configuration management includes the change management process. Computer security should be included in this process such that all changes are evaluated from a computer security perspective before implementation. For example, appropriate reviews are performed and documented before carrying out procedures that could bypass, change or reduce the effectiveness of the computer security measures in place. Personnel changes may also necessitate changes relating to computer security (e.g. credential cancellation and management). |
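A minimal sketch of a change management gate that blocks implementation until a computer security evaluation is documented; the `ChangeRequest` fields and the rule that changes affecting SDAs need a review reference are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    change_id: str
    description: str
    affects_sda: bool
    security_review_done: bool
    security_review_reference: str = ""

def may_implement(change: ChangeRequest) -> bool:
    """Illustrative gate: every change is evaluated from a computer
    security perspective before implementation; changes touching SDAs
    additionally require a documented review reference."""
    if not change.security_review_done:
        return False
    if change.affects_sda and not change.security_review_reference:
        return False
    return True

cr = ChangeRequest("CR-118", "Update historian client", affects_sda=True,
                   security_review_done=True,
                   security_review_reference="SEC-REV-77")
print(may_implement(cr))  # True
```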
| Configuration management | 7.46. | The operator should develop security procedures to support facility and system computer security design and management. During development of these procedures, the operator should consider the two person rule or segregation of work duties, taking into account the appropriate trust model and the security level assigned to the zone(s) applicable to the procedure. |
| Configuration management | 7.47. | Procedures that provide detailed instructions on how to disable or bypass computer security measures should ensure that such activities are recorded and logged. The procedure may also provide instructions for the application of alternate or compensatory computer security measures when the baseline computer security measure is disabled. |
| Configuration management | 7.48. | These procedures may be new stand‑alone procedures or may be integrated within existing procedures that meet one or more safety, security or organizational objectives. |
| Configuration management | 7.49. | Personnel management includes the necessary provisions for establishing an appropriate level of trustworthiness, enforcing confidentiality undertakings, defining required competencies and, where necessary, applying penalties or terminating employment. |
| Configuration management | 7.50. | Computer security activities and personnel related security activities should be coordinated to provide protection against insider threats. In particular, personnel with key security responsibilities (e.g. system administrators, security team) may require a higher level of trustworthiness. Further guidance on the protection from insider threats is given in Ref. [6]. |
| Configuration management | 7.51. | The CSP should include provision of training and awareness raising to develop and maintain personnel and organizational competencies and qualifications that are necessary for computer security. |
| Configuration management | 8.1. | An example of the implementation of DCSA with five different computer security levels in a nuclear power plant is presented below. This is one possible implementation of the graded approach; the exact choice of levels, DCSA and computer security measures should be tailored according to the facility and its environment through specific analysis. |
| Configuration management | 8.2. | When implementing the DCSA, the operator should consider limiting the dynamic elements of networks and individual systems to make their behaviour more predictable. This increased predictability might help in the implementation of effective computer security measures. |
| Configuration management | 8.3. | Zones assigned the most stringent computer security level should only be connected to zones assigned lower levels of security by fail‑secure, deterministic, unidirectional data communication pathways. The direction of these data pathways should be from the zone with the more stringent computer security level to the zone with the less stringent computer security level.41 Exceptions are strongly discouraged and may only be considered on a strict case by case basis and if supported by a complete justification and security risk analysis.42 |
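A minimal sketch of how the directionality rule might be checked when a new data pathway is proposed, assuming a hypothetical zone model in which lower numbers denote more stringent computer security levels:

```python
# Hypothetical zone model: lower numbers denote more stringent
# computer security levels (level 1 = most stringent).
zone_levels = {"reactor protection": 1, "operational control": 2,
               "process supervision": 3, "technical data": 4, "office": 5}

def connection_allowed(source_zone: str, destination_zone: str) -> bool:
    """A cross-level data pathway is acceptable only if it flows from the
    zone with the more stringent level towards the zone with the less
    stringent level; anything else would need an explicit, justified
    exception supported by a security risk analysis."""
    return zone_levels[source_zone] < zone_levels[destination_zone]

print(connection_allowed("reactor protection", "process supervision"))  # True
print(connection_allowed("office", "operational control"))              # False
```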
| Configuration management | 8.4. | Digital devices or communications used for monitoring, maintenance and recovery should not bypass computer security measures used to protect communication pathways between devices that have different computer security levels. |
| Configuration management | 8.5. | Systems assigned to the most stringent computer security level should be placed within the most secure zone’s boundaries.43 |
| Configuration management | 8.6. | Data communications between systems within the facility and the emergency centre (either on the site or off the site) should be protected by computer security measures. |
| Configuration management | 8.7. | Computer security measures that ensure the logical and physical decoupling of zones are based on the requirements of the zones’ computer security levels. To maintain defence in depth, a direct path connecting through several zones should not be allowed. |
| Configuration management | 8.8. | Technical control measures that provide security at the boundaries of zones should be designed to be resilient to cyber‑attack and to provide alerts in the event of potential compromise or malicious activity. |
| Configuration management | 8.9. | Where external connectivity is provided, security should be applied using the graded approach. The provision of external connectivity should meet the requirements for protecting the confidentiality, integrity and availability of sensitive information consistent with the computer security level assigned to the zone. |
| Configuration management | 8.10. | Appropriate access restrictions (including monitoring of access) should be applied to provide protection based on the graded approach because these external connections can serve as a route for compromise of systems at the facility. |
| Configuration management | 8.11. | Examples of externally accessible systems include the following:
|
| Configuration management | 8.12. | Figure 9 gives an example of one implementation of a DCSA, showing levels, zones, systems and digital assets. This is based on the guidance provided in Section 3. |
| Configuration management | 8.13. | Example security requirements applied within each computer security level are presented in paras 8.16–8.21. The exact choice of levels and their security requirements should be tailored according to the facility and its environment through specific analysis. |
| Configuration management | 8.14. | Two types of unassigned digital asset may be encountered:
|
| Configuration management | 8.15. | The operator may place restrictions on unassigned assets until they can be assessed and assigned to the appropriate computer security level and the required computer security measures can be put in place. Devices that are unassigned, for example, should not be brought into the proximity of systems that have medium to very high computer security levels. |
| Configuration management | 8.16. | For applicable systems and levels, the following generic requirements are applied:
|
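A minimal sketch of the graded approach expressed as a data structure, in which every level inherits a set of generic requirements and adds level specific measures; the requirement identifiers are placeholders, since the actual requirement lists are defined by the operator's analysis and by paras 8.16–8.21:

```python
# Placeholder requirement identifiers for illustration only.
GENERIC = {"inventory", "access control", "logging"}

level_specific = {
    1: {"unidirectional egress only", "no remote maintenance"},
    2: {"strict remote access control", "application allowlisting"},
    3: {"network segmentation", "monitored removable media"},
    4: {"patch management", "user authentication"},
    5: {"baseline endpoint protection"},
}

def requirements_for(level: int) -> set:
    """Graded approach sketch: every level gets the generic requirements
    plus the measures assigned to that level."""
    return GENERIC | level_specific.get(level, set())

print(sorted(requirements_for(1)))
```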
| Configuration management | 8.17. | In addition to the generic requirements, requirements for preventive and protective measures are used for systems that are vital to the facility and require the highest level of security (e.g. reactor protection systems). These requirements can include the following:
|
| Configuration management | 8.18. | In addition to the generic requirements, requirements for preventive and protective measures should be used for systems, such as operational control systems, that require a high level of security. These requirements can include the following:
|
| Configuration management | 8.19. | In addition to the generic requirements, requirements for preventive and protective measures should be used for real time systems that are not required for operations (e.g. process supervision systems in a control room), if all such systems have a medium severity level for various cyber threats. These requirements can include the following:
|
| Configuration management | 8.20. | In addition to the generic requirements, requirements for computer security measures should be applied to technical data management systems that are used for maintenance or operation activity management related to components or systems required by the technical specification for operation (e.g. work permit, work order, tag out, documentation management), if such systems need medium levels of computer security. These requirements can include the following:
|
| Configuration management | 8.21. | Requirements specifying computer security measures should be used for systems not directly important to technical control or operational purposes (e.g. office automation systems), if such systems need low levels of computer security. These requirements can include the following:
|
| Configuration management | A.1. | This appendix provides examples of selected CSP elements for use with the performance based approach to computer security. An operator may need to modify these elements to reflect particular organizational or facility specific circumstances, but the examples cover all the types of information that the operator needs to develop and implement an effective CSP. |
| Configuration management | A.2. | The operator should require these or similar elements to facilitate understanding between organizational units, vendors, contractors and suppliers, and competent authorities. The elements may need to be tailored to the specific characteristics of the operating organization and facility to improve understanding. |
| Management | A.3. | Senior management at a facility establishes a computer security policy as well as processes and support mechanisms to ensure that the policy is implemented. To achieve this, senior management should take the following steps:
|
| Computer security specialist | A.4. | The operator should assign overall responsibility for computer security at the facility to one individual or group. In this publication, the title ‘computer security specialist’ is used to define that role.46 |
| Computer security specialist | A.5. | The computer security specialist should coordinate closely with activities throughout the facility, but in an independent manner. The computer security specialist should have clear and accessible reporting lines directly to senior management, as computer security can affect almost all facility activities. |
| Computer security specialist | A.6. | Computer security responsibilities within different organizational departments should be clearly defined and coordinated to avoid gaps or conflicts and to ensure that computer security is implemented in a coherent manner. This is especially necessary if the computer security specialist role is assigned to a group rather than to one individual: the computer security specialist should constitute one single authority within the operating organization, responsible for addressing organization‑wide issues and resolving any conflicts that might arise. |
| Computer security specialist | A.7. | The computer security specialist should have in‑depth knowledge of computer security and good knowledge of other aspects of security in nuclear facilities, as well as knowledge of nuclear safety and project management and the ability to integrate people from different disciplines into an effective team. |
| Computer security specialist | A.8. | The computer security specialist should have the authority and responsibility for administering the CSP. |
| Computer security specialist | A.9. | The typical specific responsibilities of the computer security specialist include the following:
|
| Computer security team | A.10. | The operator should identify and assign personnel to a computer security team. This team can be a fixed group of individuals or can include individuals with specific expertise as needed. The team supports the computer security specialist in fulfilling their responsibilities: the computer security specialist needs to have access to expertise in all disciplines associated with computer security, including facility safety and plant operations as well as physical protection and personnel related security. |
| Computer security team | A.11. | Members of the computer security team should be responsible for advocating computer security in their respective organizational units. |
| Computer security team | A.12. | The computer security team’s activities include actively monitoring digital assets, including SDAs, for any indications of a possible cyber‑attack, and coordinating response to computer security incidents. This might include staffing a security operations centre for the monitoring and assessment of potential computer security incidents and for the initiation and support of response activities, which might also need support from other organizations. |
| Other management responsibilities | A.13. | Managers at different levels within the organization should ensure that appropriate attention is paid to computer security within their areas of responsibility. Typical responsibilities of managers in their respective areas include the following:
|
| Individual responsibilities | A.14. | Each individual within an organization should be responsible for performing their own tasks consistently with the CSP. Specific responsibilities include the following:
|
| Cross‑department responsibilities | A.15. | Computer security is a cross‑cutting discipline that affects and is affected by many different organizational units and activities. Computer security needs close coordination and cooperation between different organizational units to be effective. Paragraphs A.16–A.38 describe some of the departmental responsibilities and cross‑cutting issues. |
| Physical protection | A.16. | The site security plan and the CSP are both essential in developing a comprehensive security plan for the facility, and they therefore need to complement each other. SDAs are protected by physical access control requirements, and compromise of computer based systems can lead to degradation or loss of physical protection functions. Furthermore, adversaries might seek to attack a facility through coordinated cyber‑attack and physical attack (i.e. blended attack). |
| Physical protection | A.17. | If the organizational units responsible for the site security plan and the CSP are different, they should communicate and coordinate their efforts to ensure consistency between the plans during the development and review process. |
| Physical protection | A.18. | The operator should assign relevant roles and responsibilities in the development, implementation and maintenance of the CSP to physical protection personnel. These may include the following:
|
| Information technology | A.19. | IT personnel perform support, management and administrative tasks within a nuclear facility. These tasks may include activities involving digital assets used to prepare and store operational and maintenance procedures, work instructions, configuration management systems, design documents and operating manuals. |
| Information technology | A.20. | The CSP should clearly identify the digital assets and associated networks that are the responsibility of the IT personnel. IT personnel should monitor the identified digital assets and associated networks and report any computer security incidents to senior management and the computer security specialist according to the incident response plan. |
| Information technology | A.21. | IT personnel should take actions to ensure that computer security incidents involving digital assets (but not SDAs) and networks do not propagate to affect SDAs. |
| Engineering | A.22. | Engineering personnel should have formal processes to ensure coordination with other relevant organizational units to ensure that measures for nuclear security and nuclear safety are designed and implemented in an integrated manner consistent with the requirements set out in the CSP. Engineering personnel should recognize that safety, physical protection and computer security are distinct disciplines that need support from appropriately qualified experts in those different disciplines. |
| Engineering | A.23. | Engineering personnel should provide evidence of the effectiveness of the computer security architecture (i.e. the DCSA) that can be compared with the results expected on the basis of facility and system CSRM. |
| Engineering | A.24. | Engineering personnel should lead or support the system CSRM process for those facility systems of which they are the owner. |
| Engineering | A.25. | Engineering personnel should provide direction to vendors, contractors and suppliers regarding requirements for computer security within facility systems. Engineering personnel are responsible for reviewing vendors’ designs to ensure that they meet the computer security requirements. Engineering personnel should seek confirmation from the vendor that products supplied to the facility have been developed in a secure environment. Engineering personnel should establish and follow a procedure for reviewing technical product documentation, accepting on‑site product consignments and testing products to ensure that computer security requirements are met. |
| Engineering | A.26. | Engineering personnel should ensure that performance monitoring activities are in place to confirm that computer security measures continue to be effective. |
| Operations | A.27. | The CSP should identify those facility systems and networks that are the responsibility of operations personnel. Operations personnel are responsible for complying with the requirements for these systems set out in the CSP. |
| Operations | A.28. | Operations personnel should ensure that the DCSA and computer security measures under their responsibility are maintained and remain effective. |
| Operations | A.29. | Operations personnel should ensure that procedures are in place for identifying computer security incidents and initiating response for systems and networks under their responsibility. |
| Operations | A.30. | Operations personnel should promote situational awareness to ensure that only authorized removable media and mobile devices are used within the facility. |
| Procurement and supply chain organization | A.31. | Products should be procured to meet the specifications for the equipment, device or component. The specifications should include appropriate computer security requirements. |
| Procurement and supply chain organization | A.32. | Procurement processes should include checks to ensure that SDAs developed or supplied by vendors and suppliers include computer security measures consistent with each SDA's assigned computer security level. |
| Procurement and supply chain organization | A.33. | Procurement personnel should understand the importance of specific computer security requirements in procurement. These requirements should be enforced through legal agreements with vendors, contractors and suppliers, such as licences or contracts. |
| Procurement and supply chain organization | A.34. | Procurement and engineering personnel might not know that a general purpose device will be classified as an SDA if the operator uses it in a particular application. In such cases, the devices should be procured taking into account the possibility that they might be deployed as SDAs, and appropriate computer security requirements should be applied. |
| Procurement and supply chain organization | A.35. | Procurement personnel should work with engineering personnel to ensure that computer security requirements are specified as contractual requirements for vendors, contractors or suppliers and that designs submitted by vendors, contractors or suppliers meet computer security requirements. Procurement personnel should also inform engineering personnel if support from a vendor, contractor or supplier for an SDA is, or appears likely to be, no longer available. |
| Procurement and supply chain organization | A.36. | Procurement personnel should consider conducting reviews of vendors, contractors and suppliers before entering into contractual agreements. Such reviews may include analysis of the processes used by the vendor, contractor or supplier to design, develop, test, implement or support SDAs or assessment of the vendor, contractor or supplier’s training and experience in developing SDAs with the required levels of computer security. The reviews may also help (a) determine whether primary vendors, contractors or suppliers have in place security measures to properly evaluate the trustworthiness of subordinate vendors, contractors and suppliers and (b) ensure the provenance of SDAs, SDA components, and software and updates provided to the operator. |
| Procurement and supply chain organization | A.37. | Procurement personnel should ensure that all vendors, contractors and suppliers of SDAs have procedures in place to notify the operator in case of any supply chain incidents with the potential to affect SDAs (e.g. compromise of SDA components, SDA technology, development processes or sensitive information). |
| Procurement and supply chain organization | A.38. | Procurement personnel should consider ensuring that vendors, contractors and suppliers of SDAs have a trusted distribution route for delivering SDAs, SDA components, and software and updates to the operator. |
| External relationships and interfaces for risk management | A.39. | Risk management processes should include analysis of external relationships (i.e. vendors, contractors and suppliers). Responsibility and accountability for meeting requirements derived from system CSRM should be specified in contractual arrangements. |
| External relationships and interfaces for risk management | A.40. | The operator should audit and inspect relevant activities of vendors, contractors and suppliers to ensure that computer security requirements set out in the CSP are being met. Contracts with vendors, contractors and suppliers should require them to allow the operator to perform these activities. |
| External relationships and interfaces for risk management | A.41. | The operator’s risk management processes should take account of regulatory requirements and other external requirements affecting computer security. The operator should provide for relevant competent authorities to maintain oversight and perform inspections in respect of measures to meet these requirements. |
| Computer security assurance | A.42. | Computer security assurance activities should be conducted throughout the lifetime of the facility, as described in Sections 4 and 5. The specific assurance activities will vary according to the stage in the lifetime. Reference [8] provides details of assurance activities applicable to I&C systems. |
| Computer security assurance | A.43. | Such activities by an operator might include assessments (including audits), reviews, exercises and testing47. |
| Computer security assurance | A.44. | The operator should verify that the CSP is consistent with the operator’s computer security policy (e.g. computer security assessment may be used to verify that computer security requirements reflecting the operator’s policy are met). This may involve a number of complementary assessments to evaluate different elements of the CSP and their implementation. The outputs of the assessments will include identification of deficiencies and good practices, and suggestions for improvement. |
| Computer security assurance | A.45. | These activities should form the basis for continual improvement of the CSP. To support this, assurance activities should be repeatable and reliable, and should be conducted on a periodic basis, as well as whenever a computer security incident occurs or the threat changes. |
| Computer security assurance | A.46. | Assurance activities should include the evaluation of organizational effectiveness and the measures in place to ensure correct implementation and effectiveness of computer security. |
| Computer security assurance | A.47. | Assurance activities may be performed by internal or external groups: for example, computer security assessment can be performed by an internal team as a self‑assessment activity. If the assessment is performed by external groups, the results need to be verified internally. |
| Computer security assurance | A.48. | Internal and external assurance activities should be complemented by independent evaluations performed by external parties. Independent assessors will need access to relevant staff, documentation and equipment. Independent assessors may be members of the operating organization or external to the organization, but they need to be independent of the people who performed, verified and supervised the work being assessed. |
| Computer security assurance | A.49. | The trustworthiness of independent or external assessors should be determined before they are permitted access to the information or facility, as the assurance activities are likely to involve sensitive computer security information. Further information on trustworthiness assessments is given in Ref. [6]. |
| Computer security assurance | A.50. | The procedures for independent assessment should include appropriate restrictions on the removal, use, storage and distribution of sensitive information and should provide for the destruction of such information when it is no longer needed. |
| Computer security assurance | A.51. | The capabilities to conduct assurance activities should be developed and maintained to keep pace with changes in technology and the cyber threat. These capabilities are needed by both the staff performing the assurance activities and the competent authority, which might need to review the results of these activities. |
| Assessment scope | A.52. | The operator should identify the scope of the assessment in terms of the functional and security domains. |
| Assessment scope | A.53. | The scope should be appropriate to the stage of the lifetime of the facility. For example, a complete assessment of computer security might be needed during some stages, whereas in other stages, assessment of specific functional or security domains might be more appropriate. (Reference [8] identifies assessment activities at various points in the I&C system life cycle.) |
| Assessment evaluation techniques | A.54. | An assessment team should use the following techniques, as appropriate, to acquire the information the team needs to develop its conclusions and recommendations:
|
| Assessment report development | A.55. | The data collection component of the assessment consists of recording observations and data of interest from the review of documents and records, interviews with staff, and direct observations. Observations might be individually significant but might also act as a collective indicator of trends at the facility or organization that might need to be addressed. Therefore, the operator should identify observations that support findings indicating trends or recurring issues. |
| Assessment report development | A.56. | The observations should be analysed by comparison with requirements such as national regulations, organizational procedures and industry standards, as appropriate. A finding is identified if there is non‑compliance with a regulatory requirement or internal procedure. The basis used for identifying findings needs to be well defined and agreed in the planning stage of the assessment. |
| Assessment report development | A.57. | Observations do not always result in findings, and not all findings are negative: they may include identification of good practices, organizational practices or procedures that provide an effective, typically novel, method for meeting security objectives. Good practices for potential adoption by other organizations to improve their own computer security may be identified and reported. |
| Assessment report development | A.58. | In addition to findings and good practices, the assessment team may also provide recommendations and suggestions in the assessment report associated with the findings. |
| Assessment report development | A.59. | Recommendations provide guidelines for meeting legal and regulatory requirements or international norms (e.g. convention obligations) when appropriate. Recommendations do not normally include how to correct a problem, but rather only identify that a problem needs to be corrected. |
| Assessment report development | A.60. | Suggestions provide an additional level of information regarding a finding, including suggested corrective or mitigatory measures. Such information is not necessarily derived from regulatory guidance, but more typically from industry technical standards and good practice. |
| Example assessment method | A.61. | An example assessment method is described in Ref. [23]. The example method provides a cross‑domain assessment of a facility’s functional operations and its computer security. This assists in ensuring coverage of processes and systems that perform facility functions, including operations, safety, security, and emergency preparedness and response. |
| Example assessment method | A.62. | Computer security measures that protect SDAs should be managed under a configuration management plan. Such a plan should be developed and implemented by the operator and should include the following measures:
|
| Example assessment method | A.63. | A current baseline configuration of SDAs should be maintained under configuration control. The baseline configuration should be updated as necessary on the basis of system performance monitoring and, for example, to reflect system hardening or the effects of modifications on computer security. |
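A minimal sketch of drift detection against a controlled baseline, assuming hypothetical configuration snapshots represented as key–value pairs:

```python
# Hypothetical baseline and current configuration snapshots for one SDA.
baseline = {"ntp_server": "10.0.0.1", "ssh_enabled": "no", "firmware": "2.4.1"}
current  = {"ntp_server": "10.0.0.1", "ssh_enabled": "yes", "firmware": "2.4.1"}

def configuration_drift(baseline: dict, current: dict) -> dict:
    """Return settings whose current value differs from the controlled
    baseline, so that unauthorized or undocumented changes can be flagged."""
    keys = baseline.keys() | current.keys()
    return {k: (baseline.get(k), current.get(k))
            for k in keys if baseline.get(k) != current.get(k)}

print(configuration_drift(baseline, current))
# {'ssh_enabled': ('no', 'yes')}
```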
| Example assessment method | A.64. | The operator should consider putting in place a systematic process for system hardening of SDAs. System hardening is the application of a combination of administrative and technical control measures designed to make computer system components less vulnerable to cyber‑attack by removing or disabling hardware and software components that are not needed for the operation or maintenance of the system. Hardware and software typically removed or disabled include the following:
|
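A minimal sketch of one hardening step, comparing the services enabled on a device against those needed for its facility function; the service names are hypothetical:

```python
# Hypothetical inventories: services actually enabled on a device versus
# the services needed for its facility function.
enabled_services = {"modbus", "ssh", "web_admin", "upnp", "ntp"}
required_services = {"modbus", "ntp"}

def hardening_candidates(enabled: set, required: set) -> set:
    """Anything enabled but not needed for operation or maintenance is a
    candidate for removal or disabling during system hardening."""
    return enabled - required

print(sorted(hardening_candidates(enabled_services, required_services)))
# ['ssh', 'upnp', 'web_admin']
```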
| Example assessment method | A.65. | System hardening should be mandatory for SDAs that use commercial ‘off the shelf’ components, the functionality of which should be reduced to that needed to perform the SDAs’ facility functions (or system functions). |
| Example assessment method | A.66. | System hardening should aim to reduce the amount of data that need to be monitored and analysed to determine the security of the protected digital asset or system. System hardening can also help the operator better understand the normal behaviour and functionality of the system. |
| Example assessment method | A.67. | System hardening may include the use of technology to ensure that only the approved versions of authorized computer programs are allowed to run on the SDA. The records of system hardening should include documentation of the libraries that the technology has used. |
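A minimal sketch of hash based allowlisting, in which a program is accepted only if its SHA-256 digest appears on a list of approved versions; the digest shown is an arbitrary example value:

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests for approved program versions.
approved_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example digest
}

def is_approved(path: str) -> bool:
    """Only approved versions of authorized programs should run; here a
    file is accepted only if its digest appears on the allowlist."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() in approved_hashes

# Example use: is_approved("/opt/hmi/bin/viewer")
```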
| Example assessment method | A.68. | System hardening should use only secure, trusted update mechanisms. These update mechanisms should be assessed to ensure that they eliminate or minimize the potential for the update to be used as a route to attack the system being updated by, for example, ensuring that system updates are identified by encrypted signatures of authorized vendors. |
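A minimal sketch of verifying a vendor signature before an update is accepted, using the third party `cryptography` package with an Ed25519 key pair generated locally for the demonstration; the actual mechanism is whatever the vendor's trusted update channel provides:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def update_is_authentic(public_key, update_bytes: bytes, signature: bytes) -> bool:
    """Accept the update only if the vendor signature verifies."""
    try:
        public_key.verify(signature, update_bytes)
        return True
    except InvalidSignature:
        return False

# Self-contained demonstration with a locally generated key pair.
vendor_key = Ed25519PrivateKey.generate()
update = b"firmware image bytes"
signature = vendor_key.sign(update)
print(update_is_authentic(vendor_key.public_key(), update, signature))       # True
print(update_is_authentic(vendor_key.public_key(), b"tampered", signature))  # False
```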
| Example assessment method | A.69. | Vendors issue computer security updates, typically in the form of ‘patches’, to address vulnerabilities identified in their systems. Since modifications to safety systems need to follow resource intensive procedures, the immediate installation of a patch might not be possible, leaving the system at risk for some period of time. |
| Example assessment method | A.70. | The operator should obtain from the vendor or develop itself a list of software components used in the systems and the applicable software updates (including security patches). |
| Example assessment method | A.71. | The operator should have a formal process in place to ensure that computer security updates to equipment and components are assessed to determine their applicability and effect and, specifically, whether immediate installation is necessary to mitigate the associated vulnerability. The operator should either install the update or provide effective compensatory measures appropriate to protect against exploitation of the vulnerability. |
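A minimal sketch of such a triage decision, assuming hypothetical assessment fields (applicability, severity, and whether installation would interrupt a facility function):

```python
from dataclasses import dataclass

@dataclass
class PatchAssessment:
    patch_id: str
    applicable: bool          # does the patched component exist on the SDA?
    severity: str             # e.g. 'low', 'medium', 'high', 'critical'
    outage_required: bool     # would installation interrupt a facility function?

def triage(p: PatchAssessment) -> str:
    """Illustrative decision logic: install urgent, applicable patches;
    otherwise schedule them or rely on compensatory measures until the
    next maintenance window."""
    if not p.applicable:
        return "not applicable - record and close"
    if p.severity in ("high", "critical") and not p.outage_required:
        return "install immediately"
    if p.severity in ("high", "critical"):
        return "apply compensatory measures; install at next outage"
    return "schedule with routine maintenance"

print(triage(PatchAssessment("CVE-fix-123", True, "critical", True)))
```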
| Example assessment method | A.72. | The operator should identify and implement computer security measures that provide robust security to allow for the assessment of updates and associated vulnerabilities without those vulnerabilities being exploited during the period of assessment and installation. For example, system hardening could reduce the number of security updates that need to be assessed and installed, as updates that affect only functionality that has been removed or disabled need not be installed. |
| Example assessment method | A.73. | All systems covered by the CSP should be assigned an owner (e.g. a system engineer) who is responsible for monitoring the system. |
| Example assessment method | A.74. | System monitoring should include monitoring the status and effectiveness of computer security measures. |
| Example assessment method | A.75. | The system owner should be responsible for ensuring that recovery media and configuration information are up to date and that system recovery plans are maintained and can be executed when necessary (e.g. through regular exercise of the recovery plan). |
| Example assessment method | A.76. | Configuration changes to an SDA should be controlled with explicit consideration for security consequence analyses. The manager or asset owner should approve any configuration changes to an SDA prior to implementation of the changes. This approval should be formally documented. |
| Example assessment method | A.77. | Activities associated with change to the configuration of an SDA should be reviewed by the computer security specialist. Records of changes to the configuration of an SDA should be prepared, retained and reviewed. |
| Example assessment method | A.78. | The computer security specialist should have overall responsibility for oversight of configuration change control activities involving SDAs but may delegate this responsibility to asset owners. The computer security specialist should put in place requirements to ensure that effective oversight is performed and coordinated. |
| Example assessment method | A.79. | The continual monitoring of the effectiveness of a CSP in practice should include the evaluation of CSP components through exercises. |
| Example assessment method | A.80. | Exercises for information and computer security can combine assessment with training. Exercises should also include scenarios involving blended attacks incorporating coordinated cyber‑attack and physical attack. |
| Example assessment method | A.81. | The information and computer security management system may be exercised in a graded way for personnel with different roles and at different levels within the organization. Exercises test how effectively work processes and communications function in responding to a computer security incident; they also provide training for all levels of personnel involved in the management and response. |
| Example assessment method | A.82. | The operator should consider the benefit of the following:
|
| Example assessment method | A.83. | The operator should consider whether to perform intrusive testing (simulating a real cyber‑attack on real systems) as part of the evaluation of a system’s or a digital asset’s computer security, taking account of legal, safety and security considerations and of the operator’s capability to avoid or remediate any adverse effects caused to the digital asset and system. Reference [8] identifies specific restrictions on intrusive testing of I&C systems. |
| Example assessment method | A.84. | Since the detailed method of a cyber‑attack will be strongly dependent on the exact configuration of the systems attacked, a system being tested needs to be as similar to the real system as possible. Full backup and restore procedures should be in place to return the system to a known stable state if an assessment test creates abnormal conditions. |
| Example assessment method | A.85. | A test plan should specify the schedule and budget for testing and identify the goals of the testing, the expected deliverables, the hardware and software to be used, the resources needed, the rules of engagement and a recovery procedure. |
| Example assessment method | A.86. | Testing techniques may include the following:
|
| Example assessment method | A.87. | Computer security indicators can provide a common basis for evaluating vulnerabilities. Well chosen and commonly agreed indicators (e.g. a common scoring system for vulnerabilities) provide a common basis for comparing vulnerabilities across different systems. The operator should assess possible ways in which identified vulnerabilities could be exploited and take measures to prevent such exploitation. The operator should consider reporting all vulnerabilities for inclusion in a national vulnerability database. |
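A minimal sketch of prioritization using a shared base score, assuming hypothetical vulnerability records and an arbitrary threshold:

```python
# Hypothetical vulnerability records carrying a base score from a common
# scoring system (0.0-10.0, higher = more severe).
vulnerabilities = [
    {"id": "VULN-001", "system": "plant historian", "base_score": 9.8},
    {"id": "VULN-002", "system": "badge reader server", "base_score": 5.4},
    {"id": "VULN-003", "system": "engineering workstation", "base_score": 7.5},
]

def prioritize(vulns: list, threshold: float = 7.0) -> list:
    """A shared scoring scale lets vulnerabilities on different systems be
    compared directly; here anything at or above the threshold is queued
    for mitigation first, most severe first."""
    urgent = [v for v in vulns if v["base_score"] >= threshold]
    return sorted(urgent, key=lambda v: v["base_score"], reverse=True)

for v in prioritize(vulnerabilities):
    print(v["id"], v["system"], v["base_score"])
```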
| Example assessment method | A.88. | Computer security staff should be responsible for reporting any suspected computer security incidents according to the incident response plan. The operator should consider providing specialized awareness training for personnel in key roles not directly related to computer security but that could be affected by failures in computer security. |
| Example assessment method | A.89. | The operator should have a contingency plan to detect and respond to computer security incidents that might potentially affect SDAs (and for any other nuclear security events that involve computer security incidents). The plan should provide procedures to identify the location and nature of the threat, prevent or mitigate the consequences of any malicious act, notify relevant competent authorities, and recover from the event. |
| Example assessment method | A.90. | Incident response is a collection of activities (see Fig. 10), each of which should be considered. |
| Example assessment method | A.91. | Computer security incidents can involve compromise of the confidentiality, integrity and/or availability of the data processed, stored or transmitted by a computer based system. A computer security incident might also involve violation of an explicit or implied computer security policy, an acceptable use policy or a standard computer security practice. Some adverse events (e.g. floods, fires, electrical outages, excessive heat) can cause a system outage but are not the result of malicious acts and therefore are not considered to be computer security incidents. |
| Example assessment method | A.92. | A computer security incident might become an information security incident or breach if it involves the actual or suspected compromise of sensitive information. Reference [5] provides examples of potentially sensitive information associated with nuclear facilities. |
| Example assessment method | A.93. | The operator should create a local computer security incident response team, which is responsible for responding to computer security incidents within the organization. The size, composition and capabilities of a computer security incident response team will depend on the nature of the organization and its computing infrastructure, but it should include personnel with expertise in nuclear security, nuclear safety, and emergency preparedness and response as well as computer security. The computer security incident response team may have the same membership as, or some members in common with, the computer security team. |
| Example assessment method | A.94. | A computer emergency response team is a technical authority that provides assistance and response capabilities when a computer security incident occurs. The computer emergency response team may exist at different levels (e.g. national, local, industrial sector). The computer emergency response team may be available to supplement the internal computer security response capabilities of an operating organization in responding to any computer security incident. The availability of this team to respond during times of crisis should be considered in planning the operating organization’s response activities. |
| Example assessment method | A.95. | The operator should ensure the participation in exercises of any computer emergency response team members who would be involved in response as well as the computer security incident response team members. Interfaces between the computer emergency response team and the computer security incident response team, including preparatory activities (e.g. preclearance of computer emergency response team members for access to identified areas of the facility) should be considered. Exercises should be designed to test the key communication items between the competent authorities, the computer emergency response team, the computer security incident response team and site operations, as shown in Fig. 11. |
| Preparation | A.96. | Planning actions in the preparation phase include establishing a policy that will guide the operational processes for responding to computer security incidents, defining the roles and responsibilities of all parties involved in the incident response, drafting procedures consistent with the policy, and identifying assets available for response. Requirements and criteria for use in responding to computer security incidents need to be clearly defined. The plan of response actions should be approved by senior management. |
| Detection and analysis | A.97. | During the detection and analysis phase, the computer security incident response team should be responsible for the technical characterization of the incident. Detection activities include ensuring that there is adequate data monitoring in place to support detection through the collection and preservation of information related to possible incidents. The computer security incident response team may use a dedicated testing and evaluation environment to analyse incidents without affecting operational systems or disturbing potential forensic evidence. |
| Detection and analysis | A.98. | Analysis activities may extend beyond the computer security incident response team and the initial technical characterization of the incident, and some aspects of the analysis may require extensive resources. Typical priorities for analysis include the following:
|
| Mitigation (containment, eradication and recovery) | A.99. | Mitigation actions aim to contain a computer security incident; eradicate any malware or correct any mal‑operation or altered configuration from the affected systems; and recover system function and data integrity, using compensatory measures where necessary. Even if the compromised components or systems do not perform a critical safety or security function, they need to be checked and cleared to prevent propagation of the attack to a component or system that does perform such a function. Mitigation activities continue and are adapted as information is collected and analysed during the detection and analysis phase. |
| Mitigation (containment, eradication and recovery) | A.100. | When planning how to contain computer security incidents, the operator should recognize that a number of components or systems may be identified during the incident investigation as having been compromised. If any of the compromised components or systems provide a critical safety or security function — such as contributing to the protection of SDAs, the safe operation of the facility or the protection of nuclear or other radioactive material — it will be necessary to implement compensatory measures to perform that function until the component or system can be brought back into operation. |
| Mitigation (containment, eradication and recovery) | A.101. | Recovery measures may include like‑for‑like replacement (e.g. a backup firewall); isolation of safety structures, systems and components from the compromised component or system; or temporary measures, such as a guard to control access to the relevant part of the facility to replace a digital access control system. Recovery measures need to replace the function, not necessarily the compromised component or system. |
| Post‑incident activities | A.102. | The last phase of response is post‑incident activities to implement measures that will prevent the recurrence of similar types of computer security incident in the future, enable their rapid detection and/or minimize their consequences. This phase may include learning lessons within the organization and sharing intelligence on threats and lessons learned, as appropriate, with the wider computer security incident response community to help prevent a similar attack from succeeding elsewhere. Post‑incident findings may allow the development of new security measures to prevent re‑infection and provide information to update threat and vulnerability profiles. Other post‑incident activities may include evaluating the effectiveness of the CSP and identifying training to address any gaps in the response of personnel, as well as assessing the resources that were needed to address the computer security incident as a guide to planning for future incidents. |
| Reporting | A.103. | During the response to a computer security incident there may be situations in which reporting to competent authorities (or other organizations) is required or desirable. Reporting allows everyone who needs to know about a computer security incident to be informed in a timely manner. Since those responding to the incident are likely to be busy, the operator needs to consider carefully the frequency of reporting and the level of detail provided. The operator may consider assigning a specific individual as the point of contact for computer security incident reporting and for requests for information from outside organizations. |
| Reporting | A.104. | Activity planning should ensure that the computer security requirements for the performance and verification of the activities are identified and incorporated into the plan. |
| Reporting | A.105. | Required personnel and contractor qualifications related to computer security should be identified for the activities being performed and taken into account in the planning. Each responsible organization is responsible for reporting suspected computer security incidents in accordance with the incident response plan. |
| Reporting | A.106. | When developing work instructions, computer security requirements need to be taken into account. These could include instructions for the following: |
| Reporting | A.107. | Maintenance instructions should include steps for configuring the security settings on devices. |
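As a minimal sketch of the kind of check A.107 implies, the Python snippet below compares a device's reported configuration against the security settings called for in a maintenance instruction. The setting names, required values and device name are hypothetical assumptions, not requirements from this publication.

```python
# Illustrative sketch only: verify a device's reported configuration against
# the security settings required by a maintenance instruction (hypothetical).

REQUIRED_SETTINGS = {
    "remote_maintenance_port_enabled": False,
    "default_password_changed": True,
    "unused_usb_ports_disabled": True,
    "firmware_signature_check": True,
}


def verify_settings(device_name: str, reported: dict[str, bool]) -> bool:
    """Return True if every required setting matches; print any deviations."""
    compliant = True
    for setting, required_value in REQUIRED_SETTINGS.items():
        actual = reported.get(setting)
        if actual != required_value:
            compliant = False
            print(f"{device_name}: {setting} is {actual}, expected {required_value}")
    return compliant


# Example post-maintenance check on a hypothetical device.
verify_settings("flow-transmitter-07", {
    "remote_maintenance_port_enabled": True,   # deviation to be corrected
    "default_password_changed": True,
    "unused_usb_ports_disabled": True,
    "firmware_signature_check": True,
})
```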
| Reporting | A.108. | If maintenance leads to the disposition of equipment that is no longer required, this equipment should be sanitized or securely destroyed. |
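The sketch below illustrates one narrow element of sanitization: overwriting a file with random data before a storage medium leaves the facility. It is an assumption-laden illustration only; actual sanitization should follow the operator's approved procedures and applicable standards, and overwriting alone is not sufficient for all media types (physical destruction may be required).

```python
# Illustrative sketch only: overwrite a file with random bytes before disposal.
# Not a substitute for approved sanitization procedures or secure destruction.

import os
import secrets


def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite the file contents in place with random bytes, several times."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)


# Example usage (hypothetical path):
# overwrite_file("/media/usb0/maintenance_log.csv")
```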
| Reporting | A.109. | Procurement requirements related to computer security should be identified and implemented in the work plan. |
| Reporting | A.110. | Although computers are used in many aspects of work and personal life, there is a general lack of awareness and knowledge regarding the technology, cyber threats, computer security measures and the possible effects of compromise. Awareness raising and training in computer security are needed for all personnel and contractors in organizations that have nuclear security responsibilities. |
| Reporting | A.111. | Human error causes, or contributes to, computer security incidents. Staff at all levels need awareness of computer security and regular reinforcement of its importance. |
| Reporting | A.112. | Awareness of the importance of computer security can support it as follows: |
| Reporting | A.113. | The following indicators may be used to evaluate awareness of computer security in an organization: |
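As a minimal sketch of how such indicators might be tracked, the snippet below computes two example metrics: the click rate in a simulated phishing exercise and the share of incidents first reported by staff. These two indicators are hypothetical examples chosen for illustration, not the indicators listed in this publication.

```python
# Illustrative sketch only: simple awareness indicators (hypothetical examples).

def click_rate(clicked: int, emails_sent: int) -> float:
    """Fraction of simulated phishing emails that were clicked."""
    return clicked / emails_sent if emails_sent else 0.0


def staff_reporting_share(reported_by_staff: int, total_incidents: int) -> float:
    """Fraction of incidents first detected and reported by staff."""
    return reported_by_staff / total_incidents if total_incidents else 0.0


print(f"Phishing click rate: {click_rate(18, 400):.1%}")             # lower is better
print(f"Staff reporting share: {staff_reporting_share(7, 12):.1%}")  # higher is better
```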
| Reporting | A.114. | The aim of a computer security training programme is to ensure that personnel and contractors have the knowledge and capability to perform their work in accordance with the facility computer security requirements and procedures. Computer security training should be incorporated into an existing training management system. |
| Reporting | A.115. | The operator should have a training programme with the following elements: |
| Reporting | A.116. | A variety of training approaches should be used, such as e‑learning, classroom training, practical exercises and discussion forums. External organizations, including the IAEA, can provide materials to support such activities. |
| Reporting | A.117. | The training programme should include (a) indicators for evaluating computer security awareness and the effectiveness of training and (b) processes for continual improvement and periodic refresher and update training for staff, as needed. |
| Reporting | A.118. | An example process for planning response to computer security incidents can be found in Ref. [25]. |
| Reporting | I–1. | This annex provides some examples of ways in which adversaries could exploit vulnerabilities in systems performing critical facility functions. However, these are only examples, and operators need to think creatively about computer security to imagine how adversaries might act and how computer security measures might counter their actions. |
| Reporting | I–2. | The examples are derived from discussions with experts from Member States. They are not intended to provide an exhaustive list of possibilities or a recipe for attacking nuclear facilities, but rather a starting point for facility operators and Member States to develop plans to address the dynamic, rapidly changing cyber threat environment. |
| Reporting | I–3. | A coordinated cyber‑attack might consist of several phases: |
| Reporting | I–4. | Adversaries will use some or all of these tactics, and they need to be considered when developing cyber threat profiles specific to nuclear facility instrumentation and control (I&C) systems and other sensitive digital assets (SDAs). The example scenarios presented in this annex include the use of these tactics and illustrate common types of attack suggested by computer security experts with experience of the nuclear industry. |
| Reporting | I–5. | Types of threat are described in Ref. [I–1]. |
| Reporting | I–6. | Goal of the attack: To gain access to nuclear information and digital assets by exploiting a trusted path used by vendors to provide support. |
| Reporting | I–7. | Description: The attack is initially directed at the Internet based remote access portal through which vendors have access to sensitive information and facility SDAs to provide support. The adversary compromises the portal and, via privilege escalation, gains administrative control over the database and changes the email address associated with a specific vendor. This vendor has remote access to critical operational information about the facility and some of the SDAs. The adversary uses the ‘forgotten password’ function on the portal, which sends a password refresh link to the email address introduced by the adversary. The adversary uses this link to change the vendor’s password and logs in to the portal with the identity of the authorized vendor. Once logged in, the adversary has access to all the information on the portal and all the SDAs to which the vendor has access. The adversary then begins to modify the settings and operational parameters of SDAs, leading to operational instability and ultimately to the shutdown of the facility. |
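The sketch below illustrates one possible mitigation for the scenario in paragraph I–7: refusing to email a password reset link if the account's contact email address was changed recently, so that changing the address and then invoking the 'forgotten password' function does not immediately hand the account to the adversary. The data model, account identifier and 72-hour hold are hypothetical design choices, not requirements from this publication.

```python
# Illustrative sketch only: block password resets shortly after an email change,
# forcing out-of-band verification instead. All details are hypothetical.

from datetime import datetime, timedelta, timezone

EMAIL_CHANGE_HOLD = timedelta(hours=72)

ACCOUNTS = {
    "vendor-42": {
        "email": "support@vendor.example",
        "email_changed_at": datetime.now(timezone.utc) - timedelta(hours=2),
    },
}


def request_password_reset(account_id: str) -> str:
    account = ACCOUNTS[account_id]
    if datetime.now(timezone.utc) - account["email_changed_at"] < EMAIL_CHANGE_HOLD:
        # Require out-of-band verification (e.g. a call to a known contact)
        # rather than emailing a possibly attacker-controlled address.
        return "reset blocked: contact email changed recently; out-of-band verification required"
    return f"reset link sent to {account['email']}"


print(request_password_reset("vendor-42"))
```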
| Reporting | I–8. | Goal of the attack: To gain access to internal SDAs and systems. |
| Reporting | I–9. | Description: |
| Reporting | I–10. | Goal of the attack: To force the shutdown of a nuclear power plant. |
| Reporting | I–11. | Description: |
| Reporting | I–12. | Goal of the attack: To obtain enough information to plan an accurate attack on plant operations. |
| Reporting | I–13. | Description: |
| Reporting | I–14. | Goal of the attack: To obtain, through social engineering, information from a facility security officer that can be used to further an attack. |
| Reporting | I–15. | Description: |
| Reporting | [I–1] INTERNATIONAL ATOMIC ENERGY AGENCY, Computer Security for Nuclear Security, IAEA Nuclear Security Series No. 42‑G, IAEA, Vienna (2021). | |
| Reporting | II–1. | The assignment of computer security levels to systems (or zones containing systems) is based on the potential consequences of an attack on each system for the safety, security and operation of the facility: the less tolerable the consequences, the more stringent the computer security level. |
| Reporting | II–2. | To avoid case by case analyses of every system and potential consequence, criteria can be established to facilitate the assignment of the computer security levels. |
| Reporting | II–3. | One fundamental consideration is the safety classification of the system. However, there is not an automatic connection between computer security levels and safety classes. A stringent computer security level is needed for a system important to safety, but a stringent level may also be needed for systems with no safety classification if they have a critical role in preventing severe potential consequences for security. |
| Reporting | II–4. | An example graded approach to computer security levels uses the following high level criteria: |
| Reporting | II–5. | In addition to these high level criteria, the definition of the computer security levels can include a list of typical facility functions or types of system that are specific to each level. This list could simplify the assignment of computer security levels to systems. |
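The Python sketch below shows how such a list of typical facility functions could simplify assignment in practice: each function is mapped to a level, and a system takes the most stringent (numerically lowest) level of the functions it supports. The function-to-level mapping and system names are hypothetical; in practice they would be defined by the operator's own graded approach.

```python
# Illustrative sketch only: assign a computer security level to a system from
# the facility functions it supports. Level 1 is the most stringent.
# The mapping below is hypothetical.

FUNCTION_LEVELS = {
    "prevention of accident conditions": 1,
    "monitoring of safety systems": 2,
    "physical protection alarm processing": 2,
    "operational data historian": 3,
    "office administration": 4,
}


def assign_level(system_name: str, functions: list[str]) -> int:
    """A system takes the most stringent (lowest numbered) level of its functions."""
    level = min(FUNCTION_LEVELS[f] for f in functions)
    print(f"{system_name}: level {level} (functions: {', '.join(functions)})")
    return level


assign_level("reactor protection system", ["prevention of accident conditions"])
assign_level("plant computer", ["monitoring of safety systems", "operational data historian"])
```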
| Reporting | II–6. | The computer security level classification focuses on potential consequences related to compromise of computer based systems (see Ref. [II–2]). In many cases, information acquired or calculated by a digital system can also be obtained with analogue tools or by a person, in which case the computer security level can be less stringent (and therefore less restrictive for normal operations). |
| Reporting | II–7. | When several diverse digital assets are used for the same function, a primary system supporting the function needs to be chosen and assigned to a computer security level according to the consequences of its compromise. |
| Reporting | [II–1] INTERNATIONAL ELECTROTECHNICAL COMMISSION, Nuclear Power Plants — Instrumentation and Control Important to Safety — General Requirements for Systems, IEC 61513:2011, IEC, Geneva (2011). | |
| Reporting | III–1. | This annex provides an example of the application of computer security levels and zones. Table III–1 provides a list of systems used in this example and shows the mapping of computer security levels to the physical and logical zones used in this example. |
| Reporting | III–2. | For simple systems, consisting of a small number of assets in well defined physical locations, application of the computer security levels and physical and logical zones is straightforward. It is more complicated for complex systems that extend throughout the facility or for physical areas that contain systems that need to be assigned to multiple security levels, such as the main control room. |
| Reporting | III–3. | Typically, the main control room contains controls for many different categories of systems that have differing security requirements (e.g. safety systems, steam supply (boiler), electrical systems, auxiliary systems, IT systems). The human–machine interfaces for all facility systems are entirely or partly in the main control room. These systems and human–machine interfaces typically use digital assets to perform their functions. |
| Reporting | III–4. | In older facilities, this creates difficulties in the application of computer security for several reasons: |
| Reporting | III–5. | The following illustrative example is provided to explain potential computer security solutions for the issues described above, in terms of the concepts detailed in Fig. 1 of the main text. |
| Reporting | III–6. | Application of computer security zones to the main control room (along with the physical protection and fire protection systems) is difficult because of the need for centralized monitoring and management of facility functions. The computer security zone concept allows for physical and/or logical boundaries, which can help to address these limitations. The relationship is illustrated in Fig. III–1. |
| Reporting | III–7. | The main control room (and the rooms within the protected area containing electronic equipment) is assumed to be classified and protected as a vital area. This implies that sabotage of equipment within the main control room could ultimately result in unacceptable radiological consequences. |
| Reporting | III–8. | Table III–1 provides an example of a subset of systems that need monitoring, communications or operation from within the main control room. |
| Reactor protection system (computer security level 1) | III–9. | In Table III–1, the most stringent computer security level (level 1) has a requirement that both the logical and physical computer security zone boundaries be specified strictly and that these boundaries do not extend past each other. For example, the dedicated network can be constrained to locations within the vital area (or equivalent). |
| Reactor protection system (computer security level 1) | III–10. | Physical and logical access to zones assigned computer security level 1 needs to be strictly controlled. Physical access can be controlled using a robust barrier with access control and intrusion detection to meet the requirements recommended in Ref. [III–1], and logical access can be controlled through a fail‑secure, unidirectional data communication pathway (e.g. a data diode) in accordance with the guidance in this publication and Ref. [III–2]. |
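As a minimal sketch of the logical access control described in paragraph III–10, the snippet below checks proposed data flows between zones under a deliberately simplified rule set: data may leave a more stringent zone towards a less stringent one only through a unidirectional pathway (e.g. a data diode), and no flow may enter a more stringent zone from a less stringent one. This rule set is a simplifying assumption for illustration, not a complete statement of the guidance.

```python
# Illustrative sketch only: simplified check of data flows between computer
# security zones. Level 1 is the most stringent level.

from dataclasses import dataclass


@dataclass
class Flow:
    source_level: int       # level of the zone the data leaves
    destination_level: int  # level of the zone the data enters
    unidirectional: bool    # True if the path is enforced one-way (e.g. data diode)


def flow_allowed(flow: Flow) -> bool:
    if flow.destination_level < flow.source_level:
        return False  # into a more stringent zone: not allowed in this simplification
    if flow.destination_level > flow.source_level:
        return flow.unidirectional  # outwards only via a one-way pathway
    return True  # same level: subject to zone-internal rules (not modelled here)


print(flow_allowed(Flow(source_level=1, destination_level=2, unidirectional=True)))   # True
print(flow_allowed(Flow(source_level=2, destination_level=1, unidirectional=False)))  # False
```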
| Reactor protection system (computer security level 1) | III–11. | Typically, systems that perform the facility function of preventing accident conditions (e.g. those on a reactor protection system) will be assigned to the most stringent computer security level. The equipment providing the function will be located in a vital area close to the reactor, but the equipment will be monitored through a human–machine interface in the main control room. This creates a potential problem with applying computer security zones, since the interconnection between the reactor protection system and the human–machine interface might be routed outside the vital areas (e.g. in the protected area), which would violate the physical security requirement. |
| Reactor protection system (computer security level 1) | III–12. | One solution would be to separate the monitoring function from the function of preventing accident conditions. This would allow for logical separation by means of a data diode between the digital assets in the vital area preventing accident conditions and those outside the vital area used for monitoring in the main control room. This solution would only be effective if the function of preventing accident conditions was independent and did not need any action or information from outside the systems assigned to perform the function. |
| Reactor protection system (computer security level 1) | III–13. | The digital assets credited with preventing accident conditions will be assigned to the most stringent computer security level (level 1) on the basis of facility function. These digital assets will be located in a vital area outside the main control room. The digital assets credited with monitoring the reactor protection system (e.g. the reactor protection system human–machine interface console in the main control room) will be assigned security level 2 (or higher). |